00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4088
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3678
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.093 The recommended git tool is: git
00:00:00.093 using credential 00000000-0000-0000-0000-000000000002
00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.127 Fetching changes from the remote Git repository
00:00:00.133 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.168 Using shallow fetch with depth 1
00:00:00.168 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.168 > git --version # timeout=10
00:00:00.196 > git --version # 'git version 2.39.2'
00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.796 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.808 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.820 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.820 > git config core.sparsecheckout # timeout=10
00:00:04.832 > git read-tree -mu HEAD # timeout=10
00:00:04.847 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.876 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.877 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.963 [Pipeline] Start of Pipeline
00:00:04.977 [Pipeline] library
00:00:04.978 Loading library shm_lib@master
00:00:04.979 Library shm_lib@master is cached. Copying from home.
00:00:04.994 [Pipeline] node
00:00:05.008 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.010 [Pipeline] {
00:00:05.021 [Pipeline] catchError
00:00:05.023 [Pipeline] {
00:00:05.036 [Pipeline] wrap
00:00:05.044 [Pipeline] {
00:00:05.053 [Pipeline] stage
00:00:05.054 [Pipeline] { (Prologue)
00:00:05.072 [Pipeline] echo
00:00:05.073 Node: VM-host-SM38
00:00:05.079 [Pipeline] cleanWs
00:00:05.089 [WS-CLEANUP] Deleting project workspace...
00:00:05.089 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.097 [WS-CLEANUP] done
00:00:05.276 [Pipeline] setCustomBuildProperty
00:00:05.373 [Pipeline] httpRequest
00:00:06.570 [Pipeline] echo
00:00:06.571 Sorcerer 10.211.164.101 is alive
00:00:06.580 [Pipeline] retry
00:00:06.581 [Pipeline] {
00:00:06.593 [Pipeline] httpRequest
00:00:06.598 HttpMethod: GET
00:00:06.598 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.599 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.600 Response Code: HTTP/1.1 200 OK
00:00:06.601 Success: Status code 200 is in the accepted range: 200,404
00:00:06.601 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.083 [Pipeline] }
00:00:07.096 [Pipeline] // retry
00:00:07.102 [Pipeline] sh
00:00:07.385 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.400 [Pipeline] httpRequest
00:00:07.810 [Pipeline] echo
00:00:07.812 Sorcerer 10.211.164.101 is alive
00:00:07.822 [Pipeline] retry
00:00:07.824 [Pipeline] {
00:00:07.839 [Pipeline] httpRequest
00:00:07.844 HttpMethod: GET
00:00:07.844 URL: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:07.845 Sending request to url: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:07.847 Response Code: HTTP/1.1 200 OK
00:00:07.847 Success: Status code 200 is in the accepted range: 200,404
00:00:07.848 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:24.432 [Pipeline] }
00:00:24.450 [Pipeline] // retry
00:00:24.458 [Pipeline] sh
00:00:24.744 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:28.066 [Pipeline] sh
00:00:28.408 + git -C spdk log --oneline -n5
00:00:28.408 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:28.408 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:28.408 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:00:28.408 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:00:28.409 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:00:28.431 [Pipeline] withCredentials
00:00:28.444 > git --version # timeout=10
00:00:28.459 > git --version # 'git version 2.39.2'
00:00:28.477 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:28.479 [Pipeline] {
00:00:28.490 [Pipeline] retry
00:00:28.492 [Pipeline] {
00:00:28.510 [Pipeline] sh
00:00:28.796 + git ls-remote http://dpdk.org/git/dpdk main
00:00:28.810 [Pipeline] }
00:00:28.830 [Pipeline] // retry
00:00:28.836 [Pipeline] }
00:00:28.848 [Pipeline] // withCredentials
00:00:28.857 [Pipeline] httpRequest
00:00:29.266 [Pipeline] echo
00:00:29.268 Sorcerer 10.211.164.101 is alive
00:00:29.278 [Pipeline] retry
00:00:29.281 [Pipeline] {
00:00:29.298 [Pipeline] httpRequest
00:00:29.304 HttpMethod: GET
00:00:29.304 URL: http://10.211.164.101/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
00:00:29.305 Sending request to url: http://10.211.164.101/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
00:00:29.325 Response Code: HTTP/1.1 200 OK
00:00:29.326 Success: Status code 200 is in the accepted range: 200,404
00:00:29.327 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
00:01:09.857 [Pipeline] }
00:01:09.876 [Pipeline] // retry
00:01:09.884 [Pipeline] sh
00:01:10.171 + tar --no-same-owner -xf dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz
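The three package fetches above (jbp, spdk, dpdk) all follow one pattern: probe the internal package mirror ("Sorcerer"), download a commit-pinned tarball inside a retry block, and unpack it with tar --no-same-owner. A minimal shell sketch of one fetch/extract cycle, assuming plain curl in place of the pipeline's httpRequest step (mirror URL and tarball name are taken from the log):

#!/usr/bin/env bash
set -euo pipefail

pkg=spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
url=http://10.211.164.101/packages/$pkg

# Retry the download a few times, mirroring the [Pipeline] retry block.
for attempt in 1 2 3; do
  curl -fSL -o "$pkg" "$url" && break
  echo "attempt $attempt failed, retrying" >&2
  sleep 5
done

# --no-same-owner keeps extracted files owned by the invoking user
# rather than the UID recorded in the archive.
tar --no-same-owner -xf "$pkg"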
00:01:11.571 [Pipeline] sh
00:01:11.854 + git -C dpdk log --oneline -n5
00:01:11.854 4843aacb0d doc: describe send scheduling counters in mlx5 guide
00:01:11.854 a4f455560f version: 24.11-rc4
00:01:11.854 0c81db5870 dts: remove leftover node methods
00:01:11.854 71eae7fe3e doc: correct definition of stats per queue feature
00:01:11.854 f2b1510f19 net/octeon_ep: replace use of word segregate
00:01:11.874 [Pipeline] writeFile
00:01:11.890 [Pipeline] sh
00:01:12.176 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:12.190 [Pipeline] sh
00:01:12.476 + cat autorun-spdk.conf
00:01:12.476 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.476 SPDK_TEST_NVME=1
00:01:12.476 SPDK_TEST_FTL=1
00:01:12.476 SPDK_TEST_ISAL=1
00:01:12.476 SPDK_RUN_ASAN=1
00:01:12.476 SPDK_RUN_UBSAN=1
00:01:12.476 SPDK_TEST_XNVME=1
00:01:12.476 SPDK_TEST_NVME_FDP=1
00:01:12.476 SPDK_TEST_NATIVE_DPDK=main
00:01:12.476 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:12.476 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.483 RUN_NIGHTLY=1
00:01:12.485 [Pipeline] }
00:01:12.499 [Pipeline] // stage
00:01:12.514 [Pipeline] stage
00:01:12.516 [Pipeline] { (Run VM)
00:01:12.529 [Pipeline] sh
00:01:12.813 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:12.813 + echo 'Start stage prepare_nvme.sh'
00:01:12.813 Start stage prepare_nvme.sh
00:01:12.813 + [[ -n 5 ]]
00:01:12.813 + disk_prefix=ex5
00:01:12.813 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:12.813 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:12.813 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:12.813 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.813 ++ SPDK_TEST_NVME=1
00:01:12.813 ++ SPDK_TEST_FTL=1
00:01:12.813 ++ SPDK_TEST_ISAL=1
00:01:12.813 ++ SPDK_RUN_ASAN=1
00:01:12.813 ++ SPDK_RUN_UBSAN=1
00:01:12.813 ++ SPDK_TEST_XNVME=1
00:01:12.813 ++ SPDK_TEST_NVME_FDP=1
00:01:12.813 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:12.813 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:12.813 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.813 ++ RUN_NIGHTLY=1
00:01:12.813 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:12.813 + nvme_files=()
00:01:12.813 + declare -A nvme_files
00:01:12.813 + backend_dir=/var/lib/libvirt/images/backends
00:01:12.813 + nvme_files['nvme.img']=5G
00:01:12.813 + nvme_files['nvme-cmb.img']=5G
00:01:12.813 + nvme_files['nvme-multi0.img']=4G
00:01:12.813 + nvme_files['nvme-multi1.img']=4G
00:01:12.813 + nvme_files['nvme-multi2.img']=4G
00:01:12.813 + nvme_files['nvme-openstack.img']=8G
00:01:12.813 + nvme_files['nvme-zns.img']=5G
00:01:12.813 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:12.813 + (( SPDK_TEST_FTL == 1 ))
00:01:12.813 + nvme_files["nvme-ftl.img"]=6G
00:01:12.813 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:12.813 + nvme_files["nvme-fdp.img"]=1G
00:01:12.813 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:12.813 + for nvme in "${!nvme_files[@]}"
00:01:12.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:12.813 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.814 + for nvme in "${!nvme_files[@]}"
00:01:12.814 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:01:13.076 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:13.076 + for nvme in "${!nvme_files[@]}"
00:01:13.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:13.076 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.076 + for nvme in "${!nvme_files[@]}"
00:01:13.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:13.076 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:13.076 + for nvme in "${!nvme_files[@]}"
00:01:13.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:14.021 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:14.021 + for nvme in "${!nvme_files[@]}"
00:01:14.021 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:14.021 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:14.021 + for nvme in "${!nvme_files[@]}"
00:01:14.021 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:14.021 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:14.021 + for nvme in "${!nvme_files[@]}"
00:01:14.021 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:01:14.021 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:14.021 + for nvme in "${!nvme_files[@]}"
00:01:14.021 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:14.594 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:14.594 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:14.594 + echo 'End stage prepare_nvme.sh'
00:01:14.594 End stage prepare_nvme.sh
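create_nvme_img.sh itself is not shown in the log, but the "Formatting ... fmt=raw ... preallocation=falloc" lines are qemu-img output, so each backing file is a raw, fallocate-preallocated image sized from the nvme_files table. A rough sketch of what the loop amounts to (an approximation, not the script's actual contents; qemu-img must be installed). It also shows why the images appear in no particular order: bash associative arrays iterate in an unspecified order.

declare -A nvme_files=(
  [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G
  [nvme-multi1.img]=4G [nvme-multi2.img]=4G
  [nvme-openstack.img]=8G [nvme-zns.img]=5G
)
backend_dir=/var/lib/libvirt/images/backends
disk_prefix=ex5

for nvme in "${!nvme_files[@]}"; do   # iteration order is unspecified
  qemu-img create -f raw -o preallocation=falloc \
    "$backend_dir/$disk_prefix-$nvme" "${nvme_files[$nvme]}"
done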
00:01:14.608 [Pipeline] sh
00:01:14.892 + DISTRO=fedora39
00:01:14.892 + CPUS=10
00:01:14.892 + RAM=12288
00:01:14.892 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:14.892 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:14.892
00:01:14.892 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:14.892 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:14.892 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:14.892 HELP=0
00:01:14.892 DRY_RUN=0
00:01:14.892 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:01:14.892 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:14.892 NVME_AUTO_CREATE=0
00:01:14.892 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:01:14.892 NVME_CMB=,,,,
00:01:14.892 NVME_PMR=,,,,
00:01:14.892 NVME_ZNS=,,,,
00:01:14.892 NVME_MS=true,,,,
00:01:14.892 NVME_FDP=,,,on,
00:01:14.892 SPDK_VAGRANT_DISTRO=fedora39
00:01:14.892 SPDK_VAGRANT_VMCPU=10
00:01:14.892 SPDK_VAGRANT_VMRAM=12288
00:01:14.892 SPDK_VAGRANT_PROVIDER=libvirt
00:01:14.892 SPDK_VAGRANT_HTTP_PROXY=
00:01:14.892 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:14.892 SPDK_OPENSTACK_NETWORK=0
00:01:14.892 VAGRANT_PACKAGE_BOX=0
00:01:14.892 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:14.892 FORCE_DISTRO=true
00:01:14.892 VAGRANT_BOX_VERSION=
00:01:14.892 EXTRA_VAGRANTFILES=
00:01:14.892 NIC_MODEL=e1000
00:01:14.892
00:01:14.892 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:14.892 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:17.435 Bringing machine 'default' up with 'libvirt' provider...
00:01:17.694 ==> default: Creating image (snapshot of base box volume).
00:01:17.954 ==> default: Creating domain with the following settings...
00:01:17.954 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732871925_8d5ee4424c4196ae92dd
00:01:17.954 ==> default: -- Domain type: kvm
00:01:17.954 ==> default: -- Cpus: 10
00:01:17.954 ==> default: -- Feature: acpi
00:01:17.954 ==> default: -- Feature: apic
00:01:17.954 ==> default: -- Feature: pae
00:01:17.954 ==> default: -- Memory: 12288M
00:01:17.954 ==> default: -- Memory Backing: hugepages:
00:01:17.954 ==> default: -- Management MAC:
00:01:17.954 ==> default: -- Loader:
00:01:17.954 ==> default: -- Nvram:
00:01:17.954 ==> default: -- Base box: spdk/fedora39
00:01:17.954 ==> default: -- Storage pool: default
00:01:17.954 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732871925_8d5ee4424c4196ae92dd.img (20G)
00:01:17.954 ==> default: -- Volume Cache: default
00:01:17.954 ==> default: -- Kernel:
00:01:17.954 ==> default: -- Initrd:
00:01:17.954 ==> default: -- Graphics Type: vnc
00:01:17.954 ==> default: -- Graphics Port: -1
00:01:17.954 ==> default: -- Graphics IP: 127.0.0.1
00:01:17.954 ==> default: -- Graphics Password: Not defined
00:01:17.954 ==> default: -- Video Type: cirrus
00:01:17.954 ==> default: -- Video VRAM: 9216
00:01:17.954 ==> default: -- Sound Type:
00:01:17.954 ==> default: -- Keymap: en-us
00:01:17.954 ==> default: -- TPM Path:
00:01:17.954 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:17.954 ==> default: -- Command line args:
00:01:17.954 ==> default: -> value=-device,
00:01:17.954 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:17.954 ==> default: -> value=-drive,
00:01:17.954 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:17.954 ==> default: -> value=-device,
00:01:17.954 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:17.954 ==> default: -> value=-device,
00:01:17.954 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:17.954 ==> default: -> value=-drive,
00:01:17.954 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:01:17.954 ==> default: -> value=-device,
00:01:17.954 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.954 ==> default: -> value=-device,
00:01:17.955 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:17.955 ==> default: -> value=-drive,
00:01:17.955 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:17.955 ==> default: -> value=-device,
00:01:17.955 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.955 ==> default: -> value=-drive,
00:01:17.955 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:17.955 ==> default: -> value=-device,
00:01:17.955 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.955 ==> default: -> value=-drive,
00:01:17.955 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:17.955 ==> default: -> value=-device,
00:01:17.955 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.955 ==> default: -> value=-device,
00:01:17.955 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:17.955 ==> default: -> value=-device,
00:01:17.955 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:17.955 ==> default: -> value=-drive,
00:01:17.955 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:17.955 ==> default: -> value=-device,
00:01:17.955 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:18.216 ==> default: Creating shared folders metadata...
00:01:18.216 ==> default: Starting domain.
00:01:19.602 ==> default: Waiting for domain to get an IP address...
00:01:37.722 ==> default: Waiting for SSH to become available...
00:01:37.722 ==> default: Configuring and enabling network interfaces...
00:01:41.031 default: SSH address: 192.168.121.105:22
00:01:41.031 default: SSH username: vagrant
00:01:41.031 default: SSH auth method: private key
00:01:42.950 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:51.098 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:56.390 ==> default: Mounting SSHFS shared folder...
00:01:57.818 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:57.818 ==> default: Checking Mount..
00:01:59.227 ==> default: Folder Successfully Mounted!
00:01:59.227
00:01:59.227 SUCCESS!
00:01:59.227
00:01:59.227 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:59.227 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:59.227 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:59.227
00:01:59.237 [Pipeline] }
00:01:59.253 [Pipeline] // stage
00:01:59.262 [Pipeline] dir
00:01:59.263 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:59.265 [Pipeline] {
00:01:59.277 [Pipeline] catchError
00:01:59.279 [Pipeline] {
00:01:59.292 [Pipeline] sh
00:01:59.590 + vagrant ssh-config --host vagrant
00:01:59.590 + sed -ne '/^Host/,$p'
00:01:59.590 + tee ssh_conf
00:02:02.891 Host vagrant
00:02:02.891 HostName 192.168.121.105
00:02:02.891 User vagrant
00:02:02.891 Port 22
00:02:02.891 UserKnownHostsFile /dev/null
00:02:02.891 StrictHostKeyChecking no
00:02:02.891 PasswordAuthentication no
00:02:02.891 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:02.891 IdentitiesOnly yes
00:02:02.891 LogLevel FATAL
00:02:02.891 ForwardAgent yes
00:02:02.891 ForwardX11 yes
00:02:02.891
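The -device/-drive pairs in the domain settings above wire up four emulated NVMe controllers; the last one (serial 12343) hangs off an nvme-subsys device with Flexible Data Placement enabled, which is what SPDK_TEST_NVME_FDP exercises. A trimmed, standalone QEMU invocation showing just that FDP wiring, with the device flags copied from the args above (the machine and memory options are placeholders; FDP emulation needs a QEMU with FDP support, such as the v8.0.0 build used here):

# Placeholder machine/memory; nvme-subsys/nvme/nvme-ns flags match the log.
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -m 1024 \
  -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
  -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
  -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096

The addr=0x13 from the log is dropped here to avoid PCI bus placement details; the subsystem parameters (96M reclaim unit size, 2 reclaim groups, 8 reclaim unit handles) are what the guest-side FDP tests later see.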
00:02:02.906 [Pipeline] withEnv
00:02:02.908 [Pipeline] {
00:02:02.922 [Pipeline] sh
00:02:03.213 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:03.213 source /etc/os-release
00:02:03.213 [[ -e /image.version ]] && img=$(< /image.version)
00:02:03.213 # Minimal, systemd-like check.
00:02:03.213 if [[ -e /.dockerenv ]]; then
00:02:03.213 # Clear garbage from the node'\''s name:
00:02:03.213 # agt-er_autotest_547-896 -> autotest_547-896
00:02:03.213 # $HOSTNAME is the actual container id
00:02:03.213 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:03.213 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:03.213 # We can assume this is a mount from a host where container is running,
00:02:03.213 # so fetch its hostname to easily identify the target swarm worker.
00:02:03.213 container="$(< /etc/hostname) ($agent)"
00:02:03.213 else
00:02:03.213 # Fallback
00:02:03.213 container=$agent
00:02:03.213 fi
00:02:03.213 fi
00:02:03.213 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:03.213 '
00:02:03.488 [Pipeline] }
00:02:03.506 [Pipeline] // withEnv
00:02:03.513 [Pipeline] setCustomBuildProperty
00:02:03.522 [Pipeline] stage
00:02:03.523 [Pipeline] { (Tests)
00:02:03.534 [Pipeline] sh
00:02:03.817 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:04.094 [Pipeline] sh
00:02:04.380 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:04.660 [Pipeline] timeout
00:02:04.661 Timeout set to expire in 50 min
00:02:04.663 [Pipeline] {
00:02:04.679 [Pipeline] sh
00:02:04.965 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:05.539 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:05.555 [Pipeline] sh
00:02:05.842 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:06.123 [Pipeline] sh
00:02:06.413 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:06.694 [Pipeline] sh
00:02:07.020 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:07.020 ++ readlink -f spdk_repo
00:02:07.020 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:07.020 + [[ -n /home/vagrant/spdk_repo ]]
00:02:07.020 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:07.020 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:07.020 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:07.020 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:07.020 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:07.020 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:07.020 + cd /home/vagrant/spdk_repo
00:02:07.020 + source /etc/os-release
00:02:07.020 ++ NAME='Fedora Linux'
00:02:07.020 ++ VERSION='39 (Cloud Edition)'
00:02:07.020 ++ ID=fedora
00:02:07.020 ++ VERSION_ID=39
00:02:07.020 ++ VERSION_CODENAME=
00:02:07.020 ++ PLATFORM_ID=platform:f39
00:02:07.020 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:07.020 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:07.020 ++ LOGO=fedora-logo-icon
00:02:07.020 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:07.020 ++ HOME_URL=https://fedoraproject.org/
00:02:07.020 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:07.020 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:07.020 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:07.020 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:07.020 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:07.020 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:07.020 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:07.020 ++ SUPPORT_END=2024-11-12
00:02:07.020 ++ VARIANT='Cloud Edition'
00:02:07.020 ++ VARIANT_ID=cloud
00:02:07.020 + uname -a
00:02:07.020 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:07.020 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:07.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:07.866 Hugepages
00:02:07.866 node hugesize free / total
00:02:07.866 node0 1048576kB 0 / 0
00:02:07.866 node0 2048kB 0 / 0
00:02:07.866
00:02:07.866 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:07.866 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:07.866 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:07.866 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:07.866 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:07.866 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:07.866 + rm -f /tmp/spdk-ld-path
00:02:07.866 + source autorun-spdk.conf
00:02:07.866 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:07.866 ++ SPDK_TEST_NVME=1
00:02:07.866 ++ SPDK_TEST_FTL=1
00:02:07.866 ++ SPDK_TEST_ISAL=1
00:02:07.866 ++ SPDK_RUN_ASAN=1
00:02:07.866 ++ SPDK_RUN_UBSAN=1
00:02:07.866 ++ SPDK_TEST_XNVME=1
00:02:07.866 ++ SPDK_TEST_NVME_FDP=1
00:02:07.866 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:07.866 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:07.866 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:07.866 ++ RUN_NIGHTLY=1
00:02:07.866 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:07.866 + [[ -n '' ]]
00:02:07.866 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:07.866 + for M in /var/spdk/build-*-manifest.txt
00:02:07.866 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:07.866 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.128 + for M in /var/spdk/build-*-manifest.txt
00:02:08.128 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:08.128 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.128 + for M in /var/spdk/build-*-manifest.txt
00:02:08.128 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:08.128 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
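The setup.sh status table above pairs each PCI BDF with its kernel controller and namespaces (1b36:0010 is QEMU's emulated NVMe controller, and nvme2's three namespaces correspond to the multi0/1/2 backing files). The same BDF-to-controller mapping can be read back from sysfs; a small sketch, assuming only the standard Linux sysfs layout:

for ctrl in /sys/class/nvme/nvme*; do
  # /sys/class/nvme/nvmeN/device is a symlink to the PCI function.
  bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
  ns_count=$(ls "$ctrl" | grep -c '^nvme[0-9]*n[0-9]*$')
  printf '%s -> %s (%s namespaces)\n' "$bdf" "$(basename "$ctrl")" "$ns_count"
done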
00:02:08.128 ++ uname
00:02:08.128 + [[ Linux == \L\i\n\u\x ]]
00:02:08.128 + sudo dmesg -T
00:02:08.128 + sudo dmesg --clear
00:02:08.128 + dmesg_pid=5772
00:02:08.128 + [[ Fedora Linux == FreeBSD ]]
00:02:08.128 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:08.128 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:08.128 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:08.128 + sudo dmesg -Tw
00:02:08.128 + [[ -x /usr/src/fio-static/fio ]]
00:02:08.128 + export FIO_BIN=/usr/src/fio-static/fio
00:02:08.128 + FIO_BIN=/usr/src/fio-static/fio
00:02:08.128 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:08.128 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:08.128 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:08.128 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:08.128 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:08.128 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:08.128 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:08.128 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:08.128 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:08.128 09:19:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:08.128 09:19:35 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=main
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:08.128 09:19:35 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:08.128 09:19:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:08.128 09:19:35 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:08.128 09:19:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:08.128 09:19:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:08.128 09:19:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:08.128 09:19:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:08.128 09:19:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:08.128 09:19:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:08.128 09:19:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.128 09:19:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.128 09:19:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.128 09:19:35 -- paths/export.sh@5 -- $ export PATH
00:02:08.128 09:19:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.128 09:19:35 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:08.129 09:19:35 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:08.129 09:19:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732871975.XXXXXX
00:02:08.129 09:19:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732871975.B4YiEU
00:02:08.129 09:19:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:08.129 09:19:35 -- common/autobuild_common.sh@499 -- $ '[' -n main ']'
00:02:08.129 09:19:35 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:08.129 09:19:35 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:08.129 09:19:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:08.129 09:19:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:08.129 09:19:35 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:08.129 09:19:35 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:08.129 09:19:35 -- common/autotest_common.sh@10 -- $ set +x
00:02:08.389 09:19:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:08.389 09:19:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:08.389 09:19:35 -- pm/common@17 -- $ local monitor
00:02:08.389 09:19:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:08.389 09:19:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:08.389 09:19:35 -- pm/common@25 -- $ sleep 1
00:02:08.389 09:19:35 -- pm/common@21 -- $ date +%s
00:02:08.389 09:19:35 -- pm/common@21 -- $ date +%s
00:02:08.389 09:19:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732871975
00:02:08.389 09:19:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732871975
00:02:08.389 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732871975_collect-cpu-load.pm.log
00:02:08.389 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732871975_collect-vmstat.pm.log
00:02:09.331 09:19:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:09.332 09:19:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:09.332 09:19:36 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:09.332 09:19:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:09.332 09:19:36 -- spdk/autobuild.sh@16 -- $ date -u
00:02:09.332 Fri Nov 29 09:19:36 AM UTC 2024
00:02:09.332 09:19:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:09.332 v25.01-pre-276-g35cd3e84d
00:02:09.332 09:19:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:09.332 09:19:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:09.332 09:19:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:09.332 09:19:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:09.332 09:19:36 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.332 ************************************
00:02:09.332 START TEST asan
00:02:09.332 ************************************
00:02:09.332 using asan
00:02:09.332 09:19:36 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:09.332
00:02:09.332 real 0m0.000s
00:02:09.332 user 0m0.000s
00:02:09.332 sys 0m0.000s
00:02:09.332 09:19:36 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:09.332 ************************************
00:02:09.332 END TEST asan
00:02:09.332 ************************************
00:02:09.332 09:19:36 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:09.332 09:19:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:09.332 09:19:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:09.332 09:19:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:09.332 09:19:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:09.332 09:19:36 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.332 ************************************
00:02:09.332 START TEST ubsan
00:02:09.332 ************************************
00:02:09.332 using ubsan
00:02:09.332 ************************************
00:02:09.332 END TEST ubsan
00:02:09.332 ************************************
00:02:09.332 09:19:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:09.332
00:02:09.332 real 0m0.000s
00:02:09.332 user 0m0.000s
00:02:09.332 sys 0m0.000s
00:02:09.332 09:19:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:09.332 09:19:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
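The START TEST/END TEST banners and the real/user/sys lines above come from run_test in autotest_common.sh, whose body is not shown in this log. A simplified approximation of the wrapper, assuming only what its output implies (a banner, a timed command, a closing banner):

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # produces the real/user/sys 0m0.000s lines
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test asan echo 'using asan'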
00:02:09.332 09:19:37 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:02:09.332 09:19:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:09.332 09:19:37 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:09.332 09:19:37 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:09.332 09:19:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:09.332 09:19:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.332 ************************************
00:02:09.332 START TEST build_native_dpdk
00:02:09.332 ************************************
00:02:09.332 09:19:37 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:09.332 09:19:37 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
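The lines above resolve the compiler once and cache its major version; the cflag checks that follow gate -Werror and -Wno-stringop-overflow on that number. A condensed sketch of the same version-gating pattern, assuming gcc (thresholds taken from the @89/@93 checks traced below):

compiler=gcc
compiler_version=$("$compiler" -dumpversion)
compiler_version=${compiler_version%%.*}    # keep the major version only

dpdk_cflags='-fPIC -g -fcommon'
[[ $compiler == *gcc* && $compiler_version -ge 5 ]] && dpdk_cflags+=' -Werror'
[[ $compiler == *gcc* && $compiler_version -ge 10 ]] && dpdk_cflags+=' -Wno-stringop-overflow'
echo "$dpdk_cflags"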
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:09.595 4843aacb0d doc: describe send scheduling counters in mlx5 guide
00:02:09.595 a4f455560f version: 24.11-rc4
00:02:09.595 0c81db5870 dts: remove leftover node methods
00:02:09.595 71eae7fe3e doc: correct definition of stats per queue feature
00:02:09.595 f2b1510f19 net/octeon_ep: replace use of word segregate
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc4
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc4 21.11.0
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 21.11.0
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:09.595 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:09.595 09:19:37 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:09.595 patching file config/rte_config.h
00:02:09.595 Hunk #1 succeeded at 72 (offset 13 lines).
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc4 24.07.0
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 24.07.0
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc4 24.07.0
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc4 '>=' 24.07.0
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:09.596 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1
00:02:09.596 patching file drivers/bus/pci/linux/pci_uio.c
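The cmp_versions traces above implement a field-by-field comparison: both versions are split on ".", "-", and ":" (IFS=.-:), each field is normalized by decimal, and the loop decides as soon as a field differs, which is why 24.11.0-rc4 compares greater than 24.07.0 at the second field (11 > 7). A compact reimplementation of the same idea; unlike the traced decimal(), non-numeric fields such as "rc4" simply count as 0 here:

dec() { [[ $1 =~ ^[0-9]+$ ]] && echo $((10#$1)) || echo 0; }   # 07 -> 7

cmp_versions() {   # usage: cmp_versions 24.11.0-rc4 '<' 21.11.0
  local op=$2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    local a b
    a=$(dec "${ver1[v]:-0}") b=$(dec "${ver2[v]:-0}")
    (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
    (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
  done
  [[ $op == *'='* ]]   # all fields equal
}

cmp_versions 24.11.0-rc4 '<' 24.07.0 || echo "24.11.0-rc4 is not older than 24.07.0"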
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:09.596 09:19:37 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:14.885 The Meson build system
00:02:14.885 Version: 1.5.0
00:02:14.885 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:14.885 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:14.885 Build type: native build
00:02:14.885 Project name: DPDK
00:02:14.885 Project version: 24.11.0-rc4
00:02:14.885 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:14.885 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:14.885 Host machine cpu family: x86_64
00:02:14.885 Host machine cpu: x86_64
00:02:14.885 Message: ## Building in Developer Mode ##
00:02:14.885 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:14.885 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:14.885 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:14.885 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:02:14.885 Program cat found: YES (/usr/bin/cat)
00:02:14.885 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:14.885 Compiler for C supports arguments -march=native: YES
00:02:14.885 Checking for size of "void *" : 8
00:02:14.885 Checking for size of "void *" : 8 (cached)
00:02:14.885 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:14.885 Library m found: YES
00:02:14.885 Library numa found: YES
00:02:14.885 Has header "numaif.h" : YES
00:02:14.885 Library fdt found: NO
00:02:14.885 Library execinfo found: NO
00:02:14.886 Has header "execinfo.h" : YES
00:02:14.886 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:14.886 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:14.886 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:14.886 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:14.886 Run-time dependency openssl found: YES 3.1.1
00:02:14.886 Run-time dependency libpcap found: YES 1.10.4
00:02:14.886 Has header "pcap.h" with dependency libpcap: YES
00:02:14.886 Compiler for C supports arguments -Wcast-qual: YES
00:02:14.886 Compiler for C supports arguments -Wdeprecated: YES
00:02:14.886 Compiler for C supports arguments -Wformat: YES
00:02:14.886 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:14.886 Compiler for C supports arguments -Wformat-security: NO
00:02:14.886 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:14.886 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:14.886 Compiler for C supports arguments -Wnested-externs: YES
00:02:14.886 Compiler for C supports arguments -Wold-style-definition: YES
00:02:14.886 Compiler for C supports arguments -Wpointer-arith: YES
00:02:14.886 Compiler for C supports arguments -Wsign-compare: YES
00:02:14.886 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:14.886 Compiler for C supports arguments -Wundef: YES
00:02:14.886 Compiler for C supports arguments -Wwrite-strings: YES
00:02:14.886 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:14.886 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:14.886 Program objdump found: YES (/usr/bin/objdump)
00:02:14.886 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES
00:02:14.886 Checking if "AVX512 checking" compiles: YES
00:02:14.886 Fetching value of define "__AVX512F__" : 1
00:02:14.886 Fetching value of define "__AVX512BW__" : 1
00:02:14.886 Fetching value of define "__AVX512DQ__" : 1
00:02:14.886 Fetching value of define "__AVX512VL__" : 1
00:02:14.886 Fetching value of define "__SSE4_2__" : 1
00:02:14.886 Fetching value of define "__AES__" : 1
00:02:14.886 Fetching value of define "__AVX__" : 1
00:02:14.886 Fetching value of define "__AVX2__" : 1
00:02:14.886 Fetching value of define "__AVX512BW__" : 1
00:02:14.886 Fetching value of define "__AVX512CD__" : 1
00:02:14.886 Fetching value of define "__AVX512DQ__" : 1
00:02:14.886 Fetching value of define "__AVX512F__" : 1
00:02:14.886 Fetching value of define "__AVX512VL__" : 1
00:02:14.886 Fetching value of define "__PCLMUL__" : 1
00:02:14.886 Fetching value of define "__RDRND__" : 1
00:02:14.886 Fetching value of define "__RDSEED__" : 1
00:02:14.886 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:14.886 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:14.886 Message: lib/log: Defining dependency "log"
00:02:14.886 Message: lib/kvargs: Defining dependency "kvargs"
00:02:14.886 Message: lib/argparse: Defining dependency "argparse"
00:02:14.886 Message: lib/telemetry: Defining dependency "telemetry"
00:02:14.886 Checking for function "pthread_attr_setaffinity_np" : YES
00:02:14.886 Checking for function "getentropy" : NO
00:02:14.886 Message: lib/eal: Defining dependency "eal"
00:02:14.886 Message: lib/ptr_compress: Defining dependency "ptr_compress"
00:02:14.886 Message: lib/ring: Defining dependency "ring"
00:02:14.886 Message: lib/rcu: Defining dependency "rcu"
00:02:14.886 Message: lib/mempool: Defining dependency "mempool"
00:02:14.886 Message: lib/mbuf: Defining dependency "mbuf"
00:02:14.886 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:14.886 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:14.886 Compiler for C supports arguments -mpclmul: YES
00:02:14.886 Compiler for C supports arguments -maes: YES
00:02:14.886 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:14.886 Message: lib/net: Defining dependency "net"
00:02:14.886 Message: lib/meter: Defining dependency "meter"
00:02:14.886 Message: lib/ethdev: Defining dependency "ethdev"
00:02:14.886 Message: lib/pci: Defining dependency "pci"
00:02:14.886 Message: lib/cmdline: Defining dependency "cmdline"
00:02:14.886 Message: lib/metrics: Defining dependency "metrics"
00:02:14.886 Message: lib/hash: Defining dependency "hash"
00:02:14.886 Message: lib/timer: Defining dependency "timer"
00:02:14.886 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:14.886 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:14.886 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:14.886 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:14.886 Message: lib/acl: Defining dependency "acl"
00:02:14.886 Message: lib/bbdev: Defining dependency "bbdev"
00:02:14.886 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:14.886 Run-time dependency libelf found: YES 0.191
00:02:14.886 Message: lib/bpf: Defining dependency "bpf"
00:02:14.886 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:14.886 Message: lib/compressdev: Defining dependency "compressdev"
00:02:14.886 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:14.886 Message: lib/distributor: Defining dependency "distributor"
00:02:14.886 Message: lib/dmadev: Defining dependency "dmadev"
00:02:14.886 Message: lib/efd: Defining dependency "efd"
00:02:14.886 Message: lib/eventdev: Defining dependency "eventdev"
00:02:14.886 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:14.886 Message: lib/gpudev: Defining dependency "gpudev"
00:02:14.886 Message: lib/gro: Defining dependency "gro"
00:02:14.886 Message: lib/gso: Defining dependency "gso"
00:02:14.886 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:14.886 Message: lib/jobstats: Defining dependency "jobstats"
00:02:14.886 Message: lib/latencystats: Defining dependency "latencystats"
00:02:14.886 Message: lib/lpm: Defining dependency "lpm"
00:02:14.886 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:14.886 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:14.886 Fetching value of define "__AVX512IFMA__" : 1
00:02:14.886 Message: lib/member: Defining dependency "member"
00:02:14.886 Message: lib/pcapng: Defining dependency "pcapng"
00:02:14.886 Message: lib/power: Defining dependency "power"
00:02:14.886 Message: lib/rawdev: Defining dependency "rawdev"
00:02:14.886 Message: lib/regexdev: Defining dependency "regexdev"
00:02:14.886 Message: lib/mldev: Defining dependency "mldev"
00:02:14.886 Message: lib/rib: Defining dependency "rib"
00:02:14.886 Message: lib/reorder: Defining dependency "reorder"
00:02:14.886 Message: lib/sched: Defining dependency "sched"
00:02:14.886 Message: lib/security: Defining dependency "security"
00:02:14.886 Message: lib/stack: Defining dependency "stack"
00:02:14.886 Has header "linux/userfaultfd.h" : YES
00:02:14.886 Has header "linux/vduse.h" : YES
00:02:14.886 Message: lib/vhost: Defining dependency "vhost"
00:02:14.886 Message: lib/ipsec: Defining dependency "ipsec"
00:02:14.886 Message: lib/pdcp: Defining dependency "pdcp"
00:02:14.886 Message: lib/fib: Defining dependency "fib"
00:02:14.886 Message: lib/port: Defining dependency "port"
00:02:14.886 Message: lib/pdump: Defining dependency "pdump"
00:02:14.886 Message: lib/table: Defining dependency "table"
00:02:14.886 Message: lib/pipeline: Defining dependency "pipeline"
00:02:14.886 Message: lib/graph: Defining dependency "graph"
00:02:14.886 Message: lib/node: Defining dependency "node"
00:02:14.886 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:14.886 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:14.886 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:14.886 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:14.886 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:14.886 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:14.886 Compiler for C supports arguments -Wno-unused-value: YES
00:02:14.886 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:14.886 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:14.886 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:14.886 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:14.886 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:14.886 Message: drivers/power/acpi: Defining dependency "power_acpi"
00:02:14.886 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate"
00:02:14.886 Message: drivers/power/cppc: Defining dependency "power_cppc"
00:02:14.886 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate"
00:02:14.886 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore"
00:02:14.886 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm"
00:02:14.886 Has header "sys/epoll.h" : YES
00:02:14.886 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:14.886 Configuring doxy-api-html.conf using configuration
00:02:14.886 Configuring doxy-api-man.conf using configuration
00:02:14.886 Program mandb
found: YES (/usr/bin/mandb) 00:02:14.887 Program sphinx-build found: NO 00:02:14.887 Program sphinx-build found: NO 00:02:14.887 Configuring rte_build_config.h using configuration 00:02:14.887 Message: 00:02:14.887 ================= 00:02:14.887 Applications Enabled 00:02:14.887 ================= 00:02:14.887 00:02:14.887 apps: 00:02:14.887 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:14.887 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:14.887 test-pmd, test-regex, test-sad, test-security-perf, 00:02:14.887 00:02:14.887 Message: 00:02:14.887 ================= 00:02:14.887 Libraries Enabled 00:02:14.887 ================= 00:02:14.887 00:02:14.887 libs: 00:02:14.887 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:14.887 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:14.887 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:14.887 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:14.887 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:14.887 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:14.887 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:14.887 graph, node, 00:02:14.887 00:02:14.887 Message: 00:02:14.887 =============== 00:02:14.887 Drivers Enabled 00:02:14.887 =============== 00:02:14.887 00:02:14.887 common: 00:02:14.887 00:02:14.887 bus: 00:02:14.887 pci, vdev, 00:02:14.887 mempool: 00:02:14.887 ring, 00:02:14.887 dma: 00:02:14.887 00:02:14.887 net: 00:02:14.887 i40e, 00:02:14.887 raw: 00:02:14.887 00:02:14.887 crypto: 00:02:14.887 00:02:14.887 compress: 00:02:14.887 00:02:14.887 regex: 00:02:14.887 00:02:14.887 ml: 00:02:14.887 00:02:14.887 vdpa: 00:02:14.887 00:02:14.887 event: 00:02:14.887 00:02:14.887 baseband: 00:02:14.887 00:02:14.887 gpu: 00:02:14.887 00:02:14.887 power: 00:02:14.887 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:02:14.887 00:02:14.887 Message: 00:02:14.887 ================= 00:02:14.887 Content Skipped 00:02:14.887 ================= 00:02:14.887 00:02:14.887 apps: 00:02:14.887 00:02:14.887 libs: 00:02:14.887 00:02:14.887 drivers: 00:02:14.887 common/cpt: not in enabled drivers build config 00:02:14.887 common/dpaax: not in enabled drivers build config 00:02:14.887 common/iavf: not in enabled drivers build config 00:02:14.887 common/idpf: not in enabled drivers build config 00:02:14.887 common/ionic: not in enabled drivers build config 00:02:14.887 common/mvep: not in enabled drivers build config 00:02:14.887 common/octeontx: not in enabled drivers build config 00:02:14.887 bus/auxiliary: not in enabled drivers build config 00:02:14.887 bus/cdx: not in enabled drivers build config 00:02:14.887 bus/dpaa: not in enabled drivers build config 00:02:14.887 bus/fslmc: not in enabled drivers build config 00:02:14.887 bus/ifpga: not in enabled drivers build config 00:02:14.887 bus/platform: not in enabled drivers build config 00:02:14.887 bus/uacce: not in enabled drivers build config 00:02:14.887 bus/vmbus: not in enabled drivers build config 00:02:14.887 common/cnxk: not in enabled drivers build config 00:02:14.887 common/mlx5: not in enabled drivers build config 00:02:14.887 common/nfp: not in enabled drivers build config 00:02:14.887 common/nitrox: not in enabled drivers build config 00:02:14.887 common/qat: not in enabled drivers build config 00:02:14.887 common/sfc_efx: not in 
enabled drivers build config 00:02:14.887 mempool/bucket: not in enabled drivers build config 00:02:14.887 mempool/cnxk: not in enabled drivers build config 00:02:14.887 mempool/dpaa: not in enabled drivers build config 00:02:14.887 mempool/dpaa2: not in enabled drivers build config 00:02:14.887 mempool/octeontx: not in enabled drivers build config 00:02:14.887 mempool/stack: not in enabled drivers build config 00:02:14.887 dma/cnxk: not in enabled drivers build config 00:02:14.887 dma/dpaa: not in enabled drivers build config 00:02:14.887 dma/dpaa2: not in enabled drivers build config 00:02:14.887 dma/hisilicon: not in enabled drivers build config 00:02:14.887 dma/idxd: not in enabled drivers build config 00:02:14.887 dma/ioat: not in enabled drivers build config 00:02:14.887 dma/odm: not in enabled drivers build config 00:02:14.887 dma/skeleton: not in enabled drivers build config 00:02:14.887 net/af_packet: not in enabled drivers build config 00:02:14.887 net/af_xdp: not in enabled drivers build config 00:02:14.887 net/ark: not in enabled drivers build config 00:02:14.887 net/atlantic: not in enabled drivers build config 00:02:14.887 net/avp: not in enabled drivers build config 00:02:14.887 net/axgbe: not in enabled drivers build config 00:02:14.887 net/bnx2x: not in enabled drivers build config 00:02:14.887 net/bnxt: not in enabled drivers build config 00:02:14.887 net/bonding: not in enabled drivers build config 00:02:14.887 net/cnxk: not in enabled drivers build config 00:02:14.887 net/cpfl: not in enabled drivers build config 00:02:14.887 net/cxgbe: not in enabled drivers build config 00:02:14.887 net/dpaa: not in enabled drivers build config 00:02:14.887 net/dpaa2: not in enabled drivers build config 00:02:14.887 net/e1000: not in enabled drivers build config 00:02:14.887 net/ena: not in enabled drivers build config 00:02:14.887 net/enetc: not in enabled drivers build config 00:02:14.887 net/enetfec: not in enabled drivers build config 00:02:14.887 net/enic: not in enabled drivers build config 00:02:14.887 net/failsafe: not in enabled drivers build config 00:02:14.887 net/fm10k: not in enabled drivers build config 00:02:14.887 net/gve: not in enabled drivers build config 00:02:14.887 net/hinic: not in enabled drivers build config 00:02:14.887 net/hns3: not in enabled drivers build config 00:02:14.887 net/iavf: not in enabled drivers build config 00:02:14.887 net/ice: not in enabled drivers build config 00:02:14.887 net/idpf: not in enabled drivers build config 00:02:14.887 net/igc: not in enabled drivers build config 00:02:14.887 net/ionic: not in enabled drivers build config 00:02:14.887 net/ipn3ke: not in enabled drivers build config 00:02:14.887 net/ixgbe: not in enabled drivers build config 00:02:14.887 net/mana: not in enabled drivers build config 00:02:14.887 net/memif: not in enabled drivers build config 00:02:14.887 net/mlx4: not in enabled drivers build config 00:02:14.887 net/mlx5: not in enabled drivers build config 00:02:14.887 net/mvneta: not in enabled drivers build config 00:02:14.887 net/mvpp2: not in enabled drivers build config 00:02:14.887 net/netvsc: not in enabled drivers build config 00:02:14.887 net/nfb: not in enabled drivers build config 00:02:14.887 net/nfp: not in enabled drivers build config 00:02:14.887 net/ngbe: not in enabled drivers build config 00:02:14.887 net/ntnic: not in enabled drivers build config 00:02:14.887 net/null: not in enabled drivers build config 00:02:14.887 net/octeontx: not in enabled drivers build config 00:02:14.887 
net/octeon_ep: not in enabled drivers build config 00:02:14.887 net/pcap: not in enabled drivers build config 00:02:14.887 net/pfe: not in enabled drivers build config 00:02:14.888 net/qede: not in enabled drivers build config 00:02:14.888 net/r8169: not in enabled drivers build config 00:02:14.888 net/ring: not in enabled drivers build config 00:02:14.888 net/sfc: not in enabled drivers build config 00:02:14.888 net/softnic: not in enabled drivers build config 00:02:14.888 net/tap: not in enabled drivers build config 00:02:14.888 net/thunderx: not in enabled drivers build config 00:02:14.888 net/txgbe: not in enabled drivers build config 00:02:14.888 net/vdev_netvsc: not in enabled drivers build config 00:02:14.888 net/vhost: not in enabled drivers build config 00:02:14.888 net/virtio: not in enabled drivers build config 00:02:14.888 net/vmxnet3: not in enabled drivers build config 00:02:14.888 net/zxdh: not in enabled drivers build config 00:02:14.888 raw/cnxk_bphy: not in enabled drivers build config 00:02:14.888 raw/cnxk_gpio: not in enabled drivers build config 00:02:14.888 raw/cnxk_rvu_lf: not in enabled drivers build config 00:02:14.888 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:14.888 raw/gdtc: not in enabled drivers build config 00:02:14.888 raw/ifpga: not in enabled drivers build config 00:02:14.888 raw/ntb: not in enabled drivers build config 00:02:14.888 raw/skeleton: not in enabled drivers build config 00:02:14.888 crypto/armv8: not in enabled drivers build config 00:02:14.888 crypto/bcmfs: not in enabled drivers build config 00:02:14.888 crypto/caam_jr: not in enabled drivers build config 00:02:14.888 crypto/ccp: not in enabled drivers build config 00:02:14.888 crypto/cnxk: not in enabled drivers build config 00:02:14.888 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.888 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.888 crypto/ionic: not in enabled drivers build config 00:02:14.888 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.888 crypto/mlx5: not in enabled drivers build config 00:02:14.888 crypto/mvsam: not in enabled drivers build config 00:02:14.888 crypto/nitrox: not in enabled drivers build config 00:02:14.888 crypto/null: not in enabled drivers build config 00:02:14.888 crypto/octeontx: not in enabled drivers build config 00:02:14.888 crypto/openssl: not in enabled drivers build config 00:02:14.888 crypto/scheduler: not in enabled drivers build config 00:02:14.888 crypto/uadk: not in enabled drivers build config 00:02:14.888 crypto/virtio: not in enabled drivers build config 00:02:14.888 compress/isal: not in enabled drivers build config 00:02:14.888 compress/mlx5: not in enabled drivers build config 00:02:14.888 compress/nitrox: not in enabled drivers build config 00:02:14.888 compress/octeontx: not in enabled drivers build config 00:02:14.888 compress/uadk: not in enabled drivers build config 00:02:14.888 compress/zlib: not in enabled drivers build config 00:02:14.888 regex/mlx5: not in enabled drivers build config 00:02:14.888 regex/cn9k: not in enabled drivers build config 00:02:14.888 ml/cnxk: not in enabled drivers build config 00:02:14.888 vdpa/ifc: not in enabled drivers build config 00:02:14.888 vdpa/mlx5: not in enabled drivers build config 00:02:14.888 vdpa/nfp: not in enabled drivers build config 00:02:14.888 vdpa/sfc: not in enabled drivers build config 00:02:14.888 event/cnxk: not in enabled drivers build config 00:02:14.888 event/dlb2: not in enabled drivers build config 00:02:14.888 
event/dpaa: not in enabled drivers build config 00:02:14.888 event/dpaa2: not in enabled drivers build config 00:02:14.888 event/dsw: not in enabled drivers build config 00:02:14.888 event/opdl: not in enabled drivers build config 00:02:14.888 event/skeleton: not in enabled drivers build config 00:02:14.888 event/sw: not in enabled drivers build config 00:02:14.888 event/octeontx: not in enabled drivers build config 00:02:14.888 baseband/acc: not in enabled drivers build config 00:02:14.888 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:14.888 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:14.888 baseband/la12xx: not in enabled drivers build config 00:02:14.888 baseband/null: not in enabled drivers build config 00:02:14.888 baseband/turbo_sw: not in enabled drivers build config 00:02:14.888 gpu/cuda: not in enabled drivers build config 00:02:14.888 power/amd_uncore: not in enabled drivers build config 00:02:14.888 00:02:14.888 00:02:14.888 Message: DPDK build config complete: 00:02:14.888 source path = "/home/vagrant/spdk_repo/dpdk" 00:02:14.888 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:02:14.888 Build targets in project: 244 00:02:14.888 00:02:14.888 DPDK 24.11.0-rc4 00:02:14.888 00:02:14.888 User defined options 00:02:14.888 libdir : lib 00:02:14.888 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:14.888 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:14.888 c_link_args : 00:02:14.888 enable_docs : false 00:02:14.888 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:14.888 enable_kmods : false 00:02:15.828 machine : native 00:02:15.828 tests : false 00:02:15.828 00:02:15.828 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.828 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
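Note on the two Meson deprecation warnings captured above (config/meson.build:122 says the "machine" option is deprecated in favor of "cpu_instruction_set", and Meson warns that running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous): a minimal sketch of an equivalent, non-deprecated configure-and-build invocation, assuming the same paths and option values recorded in this log rather than anything outside it:

    # Sketch only, reusing the paths/options logged above; swaps the bare
    # `meson` call for the explicit `meson setup` subcommand and replaces
    # the deprecated -Dmachine=native with -Dcpu_instruction_set=native,
    # as the warnings themselves recommend.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    # Build step, matching the ninja invocation that follows in the log:
    ninja -C build-tmp -j10
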
00:02:15.828 09:19:43 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:15.828 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:16.089 [1/764] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:02:16.089 [2/764] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:02:16.089 [3/764] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:02:16.089 [4/764] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:02:16.089 [5/764] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:16.089 [6/764] Linking static target lib/librte_kvargs.a 00:02:16.089 [7/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:16.089 [8/764] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:16.089 [9/764] Linking static target lib/librte_log.a 00:02:16.351 [10/764] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:16.351 [11/764] Linking static target lib/librte_argparse.a 00:02:16.351 [12/764] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.351 [13/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.351 [14/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.351 [15/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.351 [16/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.351 [17/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:16.351 [18/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.351 [19/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:16.613 [20/764] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.613 [21/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.613 [22/764] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.613 [23/764] Linking target lib/librte_log.so.25.0 00:02:16.875 [24/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.875 [25/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.875 [26/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:02:16.875 [27/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.875 [28/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.875 [29/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:16.875 [30/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.137 [31/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:17.137 [32/764] Linking static target lib/librte_telemetry.a 00:02:17.137 [33/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.137 [34/764] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:17.137 [35/764] Linking target lib/librte_kvargs.so.25.0 00:02:17.137 [36/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.137 [37/764] Linking target lib/librte_argparse.so.25.0 00:02:17.137 [38/764] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:17.137 [39/764] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.137 [40/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.398 [41/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.399 [42/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.399 [43/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.399 [44/764] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.399 [45/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.399 [46/764] Linking target lib/librte_telemetry.so.25.0 00:02:17.399 [47/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.399 [48/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:17.661 [49/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.661 [50/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:02:17.661 [51/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.661 [52/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.661 [53/764] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:02:17.661 [54/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:17.661 [55/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.921 [56/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.921 [57/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:17.921 [58/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:17.921 [59/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:17.921 [60/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.921 [61/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.181 [62/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.181 [63/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.181 [64/764] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.181 [65/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.181 [66/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.181 [67/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.442 [68/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.442 [69/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.442 [70/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.442 [71/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.442 [72/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.442 [73/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.442 [74/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.442 [75/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.702 [76/764] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.702 [77/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.702 [78/764] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.702 [79/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.702 [80/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.702 [81/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.702 [82/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.702 [83/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.961 [84/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.961 [85/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.961 [86/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.961 [87/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.961 [88/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.961 [89/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.961 [90/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.961 [91/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:19.219 [92/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.220 [93/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.220 [94/764] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.220 [95/764] Linking static target lib/librte_ring.a 00:02:19.220 [96/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.220 [97/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.478 [98/764] Linking static target lib/librte_eal.a 00:02:19.478 [99/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.478 [100/764] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.478 [101/764] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.478 [102/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.478 [103/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.737 [104/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.737 [105/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.737 [106/764] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.737 [107/764] Linking static target lib/librte_mempool.a 00:02:19.737 [108/764] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.737 [109/764] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.737 [110/764] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.996 [111/764] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.996 [112/764] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.996 [113/764] Linking static target lib/librte_rcu.a 00:02:19.996 [114/764] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:19.996 [115/764] Linking static target lib/librte_net.a 00:02:19.996 [116/764] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.996 [117/764] Linking static target lib/librte_meter.a 00:02:19.996 [118/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.996 [119/764] Linking static target lib/librte_mbuf.a 00:02:20.254 [120/764] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.254 [121/764] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.254 [122/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.254 [123/764] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.254 [124/764] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.254 [125/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.254 [126/764] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.513 [127/764] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.513 [128/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.513 [129/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.771 [130/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.771 [131/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.771 [132/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.029 [133/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.029 [134/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.029 [135/764] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.029 [136/764] Linking static target lib/librte_pci.a 00:02:21.029 [137/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:21.029 [138/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.029 [139/764] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.029 [140/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.029 [141/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.310 [142/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.310 [143/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.310 [144/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.310 [145/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:21.310 [146/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.310 [147/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.310 [148/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.310 [149/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.310 [150/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.310 [151/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.310 [152/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.310 [153/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.310 [154/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.594 [155/764] Linking static target lib/librte_cmdline.a 00:02:21.594 [156/764] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.594 [157/764] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.594 [158/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:21.594 [159/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:21.594 [160/764] 
Linking static target lib/librte_metrics.a 00:02:21.853 [161/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.853 [162/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.853 [163/764] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.853 [164/764] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.111 [165/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.111 [166/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:02:22.369 [167/764] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.369 [168/764] Linking static target lib/librte_timer.a 00:02:22.369 [169/764] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:22.369 [170/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:22.369 [171/764] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:22.369 [172/764] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:22.369 [173/764] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.627 [174/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:22.884 [175/764] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:22.884 [176/764] Linking static target lib/librte_bitratestats.a 00:02:22.884 [177/764] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.884 [178/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:23.141 [179/764] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:23.141 [180/764] Linking static target lib/librte_bbdev.a 00:02:23.141 [181/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:23.399 [182/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.399 [183/764] Linking static target lib/librte_ethdev.a 00:02:23.399 [184/764] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:23.399 [185/764] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.399 [186/764] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:23.399 [187/764] Linking static target lib/acl/libavx2_tmp.a 00:02:23.399 [188/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:23.399 [189/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:23.399 [190/764] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.656 [191/764] Linking static target lib/librte_hash.a 00:02:23.656 [192/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:23.656 [193/764] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.656 [194/764] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:23.656 [195/764] Linking target lib/librte_eal.so.25.0 00:02:23.656 [196/764] Linking static target lib/librte_cfgfile.a 00:02:23.914 [197/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:23.914 [198/764] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:02:23.914 [199/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:23.914 [200/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:23.914 [201/764] Linking target lib/librte_ring.so.25.0 00:02:23.914 [202/764] Linking target lib/librte_meter.so.25.0 00:02:23.914 [203/764] Linking target 
lib/librte_pci.so.25.0 00:02:23.914 [204/764] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:02:23.914 [205/764] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:02:23.914 [206/764] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:02:23.914 [207/764] Linking target lib/librte_rcu.so.25.0 00:02:23.914 [208/764] Linking target lib/librte_mempool.so.25.0 00:02:24.173 [209/764] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.173 [210/764] Linking target lib/librte_timer.so.25.0 00:02:24.173 [211/764] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.173 [212/764] Linking target lib/librte_cfgfile.so.25.0 00:02:24.173 [213/764] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:02:24.173 [214/764] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:02:24.173 [215/764] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:02:24.173 [216/764] Linking target lib/librte_mbuf.so.25.0 00:02:24.173 [217/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:24.173 [218/764] Linking static target lib/librte_acl.a 00:02:24.173 [219/764] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:02:24.173 [220/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.173 [221/764] Linking target lib/librte_net.so.25.0 00:02:24.173 [222/764] Linking target lib/librte_bbdev.so.25.0 00:02:24.432 [223/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:24.432 [224/764] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:02:24.432 [225/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.432 [226/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:24.432 [227/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.432 [228/764] Linking static target lib/librte_bpf.a 00:02:24.432 [229/764] Linking target lib/librte_cmdline.so.25.0 00:02:24.432 [230/764] Linking target lib/librte_hash.so.25.0 00:02:24.432 [231/764] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.432 [232/764] Linking target lib/librte_acl.so.25.0 00:02:24.432 [233/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.432 [234/764] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:02:24.432 [235/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.691 [236/764] Linking static target lib/librte_compressdev.a 00:02:24.691 [237/764] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:02:24.691 [238/764] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.691 [239/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.975 [240/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.975 [241/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.975 [242/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:24.975 [243/764] Linking static target lib/librte_distributor.a 00:02:24.975 [244/764] 
Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.975 [245/764] Linking target lib/librte_compressdev.so.25.0 00:02:24.975 [246/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:24.975 [247/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.975 [248/764] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.975 [249/764] Linking static target lib/librte_dmadev.a 00:02:25.233 [250/764] Linking target lib/librte_distributor.so.25.0 00:02:25.233 [251/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.491 [252/764] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.491 [253/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:25.491 [254/764] Linking target lib/librte_dmadev.so.25.0 00:02:25.491 [255/764] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:02:25.491 [256/764] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:25.491 [257/764] Linking static target lib/librte_efd.a 00:02:25.749 [258/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:25.749 [259/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.749 [260/764] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.749 [261/764] Linking target lib/librte_efd.so.25.0 00:02:25.749 [262/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:26.016 [263/764] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:26.016 [264/764] Linking static target lib/librte_dispatcher.a 00:02:26.016 [265/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.016 [266/764] Linking static target lib/librte_cryptodev.a 00:02:26.016 [267/764] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:26.275 [268/764] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:26.275 [269/764] Linking static target lib/librte_gpudev.a 00:02:26.275 [270/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:26.275 [271/764] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.275 [272/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:26.275 [273/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.533 [274/764] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:26.533 [275/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.533 [276/764] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:26.533 [277/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:26.791 [278/764] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.791 [279/764] Linking target lib/librte_gpudev.so.25.0 00:02:26.791 [280/764] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:26.791 [281/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:26.791 [282/764] Linking static target lib/librte_gro.a 00:02:26.791 [283/764] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:26.791 [284/764] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:26.791 [285/764] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:27.049 [286/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:27.050 [287/764] Linking static target lib/librte_gso.a 00:02:27.050 [288/764] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.050 [289/764] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.050 [290/764] Linking target lib/librte_cryptodev.so.25.0 00:02:27.050 [291/764] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.050 [292/764] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.050 [293/764] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:02:27.308 [294/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:27.308 [295/764] Linking target lib/librte_ethdev.so.25.0 00:02:27.308 [296/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:27.308 [297/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:27.308 [298/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:27.308 [299/764] Linking static target lib/librte_eventdev.a 00:02:27.308 [300/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:27.308 [301/764] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:02:27.308 [302/764] Linking target lib/librte_metrics.so.25.0 00:02:27.308 [303/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:27.308 [304/764] Linking target lib/librte_bpf.so.25.0 00:02:27.308 [305/764] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:27.566 [306/764] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:02:27.566 [307/764] Linking target lib/librte_gro.so.25.0 00:02:27.566 [308/764] Linking target lib/librte_bitratestats.so.25.0 00:02:27.566 [309/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:27.566 [310/764] Linking static target lib/librte_jobstats.a 00:02:27.566 [311/764] Linking static target lib/librte_ip_frag.a 00:02:27.566 [312/764] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:02:27.566 [313/764] Linking target lib/librte_gso.so.25.0 00:02:27.566 [314/764] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:27.566 [315/764] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:27.566 [316/764] Linking static target lib/librte_latencystats.a 00:02:27.824 [317/764] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.824 [318/764] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.824 [319/764] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.824 [320/764] Linking target lib/librte_ip_frag.so.25.0 00:02:27.824 [321/764] Linking target lib/librte_jobstats.so.25.0 00:02:27.824 [322/764] Linking target lib/librte_latencystats.so.25.0 00:02:27.824 [323/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:27.824 [324/764] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:27.824 [325/764] Generating symbol file 
lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:02:27.824 [326/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:27.824 [327/764] Linking static target lib/librte_lpm.a 00:02:27.824 [328/764] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:02:28.088 [329/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:28.088 [330/764] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:02:28.088 [331/764] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:28.088 [332/764] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.088 [333/764] Linking target lib/librte_lpm.so.25.0 00:02:28.369 [334/764] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.369 [335/764] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:28.369 [336/764] Linking static target lib/librte_power.a 00:02:28.369 [337/764] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:02:28.369 [338/764] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:28.369 [339/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:28.369 [340/764] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:28.369 [341/764] Linking static target lib/librte_pcapng.a 00:02:28.369 [342/764] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:28.369 [343/764] Linking static target lib/librte_rawdev.a 00:02:28.628 [344/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:28.628 [345/764] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:28.628 [346/764] Linking static target lib/librte_regexdev.a 00:02:28.628 [347/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:28.628 [348/764] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.628 [349/764] Linking target lib/librte_pcapng.so.25.0 00:02:28.628 [350/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:28.886 [351/764] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:02:28.886 [352/764] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.886 [353/764] Linking target lib/librte_rawdev.so.25.0 00:02:28.887 [354/764] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.887 [355/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:28.887 [356/764] Linking static target lib/librte_mldev.a 00:02:28.887 [357/764] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.887 [358/764] Linking target lib/librte_eventdev.so.25.0 00:02:28.887 [359/764] Linking target lib/librte_power.so.25.0 00:02:29.145 [360/764] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:02:29.145 [361/764] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:02:29.145 [362/764] Linking target lib/librte_dispatcher.so.25.0 00:02:29.145 [363/764] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:29.145 [364/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:29.145 [365/764] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.145 [366/764] Compiling C object 
lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:29.145 [367/764] Linking target lib/librte_regexdev.so.25.0 00:02:29.145 [368/764] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.145 [369/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:29.145 [370/764] Linking static target lib/librte_reorder.a 00:02:29.145 [371/764] Linking static target lib/librte_rib.a 00:02:29.403 [372/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:29.403 [373/764] Linking static target lib/librte_member.a 00:02:29.403 [374/764] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:29.403 [375/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:29.403 [376/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:29.404 [377/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:29.404 [378/764] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.404 [379/764] Linking static target lib/librte_stack.a 00:02:29.662 [380/764] Linking target lib/librte_reorder.so.25.0 00:02:29.662 [381/764] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.662 [382/764] Linking target lib/librte_member.so.25.0 00:02:29.662 [383/764] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.662 [384/764] Linking target lib/librte_rib.so.25.0 00:02:29.662 [385/764] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:02:29.662 [386/764] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.662 [387/764] Linking static target lib/librte_security.a 00:02:29.662 [388/764] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.662 [389/764] Linking target lib/librte_stack.so.25.0 00:02:29.662 [390/764] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:02:29.662 [391/764] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:29.921 [392/764] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.921 [393/764] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.921 [394/764] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.921 [395/764] Linking target lib/librte_security.so.25.0 00:02:30.180 [396/764] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:02:30.180 [397/764] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.180 [398/764] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.180 [399/764] Linking target lib/librte_mldev.so.25.0 00:02:30.180 [400/764] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:30.180 [401/764] Linking static target lib/librte_sched.a 00:02:30.439 [402/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:30.439 [403/764] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.439 [404/764] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:30.439 [405/764] Linking target lib/librte_sched.so.25.0 00:02:30.697 [406/764] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:30.697 [407/764] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:30.697 [408/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
00:02:30.697 [409/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.955 [410/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.955 [411/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:31.212 [412/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:31.212 [413/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:31.212 [414/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:31.212 [415/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:31.212 [416/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:31.470 [417/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:31.470 [418/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:31.470 [419/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:31.729 [420/764] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:31.729 [421/764] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:31.987 [422/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:31.987 [423/764] Linking static target lib/librte_ipsec.a 00:02:31.987 [424/764] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:32.245 [425/764] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.245 [426/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:32.245 [427/764] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:32.245 [428/764] Linking target lib/librte_ipsec.so.25.0 00:02:32.245 [429/764] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:32.245 [430/764] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:32.245 [431/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:32.245 [432/764] Linking static target lib/librte_pdcp.a 00:02:32.245 [433/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:32.245 [434/764] Linking static target lib/librte_fib.a 00:02:32.245 [435/764] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:32.503 [436/764] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.503 [437/764] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.503 [438/764] Linking target lib/librte_pdcp.so.25.0 00:02:32.503 [439/764] Linking target lib/librte_fib.so.25.0 00:02:32.760 [440/764] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:32.760 [441/764] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:32.760 [442/764] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:33.019 [443/764] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:33.019 [444/764] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:33.019 [445/764] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:33.278 [446/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:33.278 [447/764] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:33.278 [448/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:33.278 [449/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:33.536 [450/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:33.536 [451/764] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:33.536 [452/764] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:33.536 [453/764] Linking static target lib/librte_port.a 00:02:33.536 [454/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:33.793 [455/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:33.793 [456/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:33.793 [457/764] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:33.793 [458/764] Linking static target lib/librte_pdump.a 00:02:33.793 [459/764] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:33.793 [460/764] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:33.793 [461/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:34.051 [462/764] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.051 [463/764] Linking target lib/librte_port.so.25.0 00:02:34.051 [464/764] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.051 [465/764] Linking target lib/librte_pdump.so.25.0 00:02:34.051 [466/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.052 [467/764] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:34.052 [468/764] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:34.309 [469/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:34.309 [470/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:34.310 [471/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:34.310 [472/764] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:34.567 [473/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:34.567 [474/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:34.567 [475/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:34.567 [476/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:34.825 [477/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:34.825 [478/764] Linking static target lib/librte_table.a 00:02:34.825 [479/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:34.825 [480/764] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:35.084 [481/764] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:35.084 [482/764] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.084 [483/764] Linking target lib/librte_table.so.25.0 00:02:35.084 [484/764] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:35.341 [485/764] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:35.341 [486/764] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:35.342 [487/764] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:35.599 [488/764] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:35.599 [489/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:35.599 [490/764] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:35.599 [491/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:35.858 
[492/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:35.858 [493/764] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:35.858 [494/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:35.858 [495/764] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:35.858 [496/764] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:35.858 [497/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:35.858 [498/764] Linking static target lib/librte_graph.a 00:02:36.117 [499/764] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:36.117 [500/764] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:36.375 [501/764] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.375 [502/764] Linking target lib/librte_graph.so.25.0 00:02:36.375 [503/764] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:36.633 [504/764] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:36.633 [505/764] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:36.633 [506/764] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:36.633 [507/764] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:36.633 [508/764] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:36.633 [509/764] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:36.633 [510/764] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:36.633 [511/764] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:36.633 [512/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:36.891 [513/764] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:36.891 [514/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.148 [515/764] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:37.148 [516/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.148 [517/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.148 [518/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.148 [519/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.148 [520/764] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:37.148 [521/764] Linking static target lib/librte_node.a 00:02:37.406 [522/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:37.406 [523/764] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:37.406 [524/764] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.406 [525/764] Linking target lib/librte_node.so.25.0 00:02:37.406 [526/764] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:37.406 [527/764] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.406 [528/764] Linking static target drivers/librte_bus_vdev.a 00:02:37.406 [529/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:37.406 [530/764] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:37.406 [531/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:37.406 [532/764] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:02:37.663 [533/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:37.663 [534/764] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.663 [535/764] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:37.663 [536/764] Linking target drivers/librte_bus_vdev.so.25.0 00:02:37.663 [537/764] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.663 [538/764] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.663 [539/764] Linking static target drivers/librte_bus_pci.a 00:02:37.663 [540/764] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:37.663 [541/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:37.920 [542/764] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:37.920 [543/764] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:37.920 [544/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:37.920 [545/764] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:37.920 [546/764] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.920 [547/764] Linking static target drivers/librte_mempool_ring.a 00:02:37.920 [548/764] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.920 [549/764] Linking target drivers/librte_mempool_ring.so.25.0 00:02:38.178 [550/764] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.178 [551/764] Linking target drivers/librte_bus_pci.so.25.0 00:02:38.178 [552/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:38.178 [553/764] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:38.436 [554/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:38.436 [555/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:38.436 [556/764] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:39.050 [557/764] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:39.050 [558/764] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:39.050 [559/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:39.307 [560/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:39.307 [561/764] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:39.307 [562/764] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:39.565 [563/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:39.565 [564/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:39.565 [565/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:39.823 [566/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:39.823 [567/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:40.081 [568/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:40.081 [569/764] Compiling C object 
drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:02:40.081 [570/764] Linking static target drivers/libtmp_rte_power_acpi.a 00:02:40.081 [571/764] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:02:40.081 [572/764] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:02:40.081 [573/764] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:02:40.081 [574/764] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:40.081 [575/764] Linking static target drivers/librte_power_acpi.a 00:02:40.081 [576/764] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:40.081 [577/764] Linking target drivers/librte_power_acpi.so.25.0 00:02:40.338 [578/764] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:02:40.339 [579/764] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:40.339 [580/764] Linking static target drivers/librte_power_amd_pstate.a 00:02:40.339 [581/764] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:40.339 [582/764] Linking target drivers/librte_power_amd_pstate.so.25.0 00:02:40.339 [583/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:02:40.339 [584/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:02:40.339 [585/764] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:02:40.339 [586/764] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:02:40.339 [587/764] Linking static target drivers/libtmp_rte_power_cppc.a 00:02:40.339 [588/764] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:02:40.339 [589/764] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:02:40.597 [590/764] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:02:40.597 [591/764] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:02:40.597 [592/764] Linking static target drivers/librte_power_kvm_vm.a 00:02:40.597 [593/764] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:02:40.597 [594/764] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:40.597 [595/764] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:02:40.597 [596/764] Linking static target drivers/librte_power_intel_pstate.a 00:02:40.597 [597/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:40.597 [598/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:40.597 [599/764] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:40.597 [600/764] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:40.597 [601/764] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:40.597 [602/764] Linking static target drivers/librte_power_cppc.a 00:02:40.597 [603/764] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:02:40.597 [604/764] Linking target drivers/librte_power_cppc.so.25.0 00:02:40.597 [605/764] Linking target 
drivers/librte_power_intel_pstate.so.25.0 00:02:40.597 [606/764] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:02:40.597 [607/764] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.597 [608/764] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:02:40.597 [609/764] Linking target drivers/librte_power_kvm_vm.so.25.0 00:02:40.855 [610/764] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:40.855 [611/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:40.855 [612/764] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:02:40.855 [613/764] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:40.855 [614/764] Linking static target drivers/librte_power_intel_uncore.a 00:02:40.855 [615/764] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:40.855 [616/764] Linking target drivers/librte_power_intel_uncore.so.25.0 00:02:41.114 [617/764] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:41.114 [618/764] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:41.114 [619/764] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:41.372 [620/764] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:41.372 [621/764] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:41.372 [622/764] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:41.372 [623/764] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:41.372 [624/764] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:41.372 [625/764] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:41.629 [626/764] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:41.629 [627/764] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:41.629 [628/764] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:41.629 [629/764] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:41.887 [630/764] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:41.887 [631/764] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:41.887 [632/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:41.887 [633/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:41.887 [634/764] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:41.887 [635/764] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:41.887 [636/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:42.145 [637/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:42.145 [638/764] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:42.145 [639/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:42.145 [640/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:42.145 [641/764] Linking static target lib/librte_vhost.a 00:02:42.403 [642/764] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:42.403 [643/764] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:42.403 [644/764] Linking static target drivers/librte_net_i40e.a 00:02:42.403 [645/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 
00:02:42.403 [646/764] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:42.403 [647/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:42.403 [648/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:42.661 [649/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:42.661 [650/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:42.661 [651/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:42.918 [652/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:42.918 [653/764] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.918 [654/764] Linking target drivers/librte_net_i40e.so.25.0 00:02:42.918 [655/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:43.177 [656/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:43.177 [657/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:43.177 [658/764] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.177 [659/764] Linking target lib/librte_vhost.so.25.0 00:02:43.177 [660/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:43.177 [661/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:43.435 [662/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:43.435 [663/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:43.435 [664/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:43.435 [665/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:43.435 [666/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:43.435 [667/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:43.435 [668/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:43.695 [669/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:43.695 [670/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:43.695 [671/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:43.964 [672/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:43.964 [673/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:43.964 [674/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:44.237 [675/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:44.237 [676/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:44.237 [677/764] Linking static target lib/librte_pipeline.a 00:02:44.495 [678/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:44.753 [679/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:44.753 [680/764] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:44.753 [681/764] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:44.753 [682/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:44.753 [683/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:44.753 [684/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:44.753 [685/764] Linking target app/dpdk-dumpcap 00:02:45.010 [686/764] Linking target app/dpdk-pdump 00:02:45.010 [687/764] Linking target app/dpdk-graph 00:02:45.010 [688/764] Linking target app/dpdk-proc-info 00:02:45.010 [689/764] Linking target app/dpdk-test-acl 00:02:45.011 [690/764] Linking target app/dpdk-test-compress-perf 00:02:45.011 [691/764] Linking target app/dpdk-test-cmdline 00:02:45.011 [692/764] Linking target app/dpdk-test-crypto-perf 00:02:45.268 [693/764] Linking target app/dpdk-test-dma-perf 00:02:45.268 [694/764] Linking target app/dpdk-test-fib 00:02:45.268 [695/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:45.268 [696/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:45.268 [697/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:45.526 [698/764] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:45.526 [699/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:45.526 [700/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:45.526 [701/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:45.526 [702/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:45.784 [703/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:45.784 [704/764] Linking target app/dpdk-test-gpudev 00:02:45.784 [705/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:45.784 [706/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:45.784 [707/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:46.042 [708/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:46.042 [709/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:46.042 [710/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:46.042 [711/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:46.042 [712/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:46.300 [713/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:46.300 [714/764] Linking target app/dpdk-test-flow-perf 00:02:46.300 [715/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:46.300 [716/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:46.300 [717/764] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.300 [718/764] Linking target app/dpdk-test-eventdev 00:02:46.300 [719/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:46.300 [720/764] Linking target lib/librte_pipeline.so.25.0 00:02:46.300 [721/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:46.557 [722/764] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:46.557 [723/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:46.557 [724/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:46.557 [725/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:46.815 [726/764] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:46.815 [727/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:46.815 [728/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:46.815 [729/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:46.815 [730/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:47.072 [731/764] Linking target app/dpdk-test-bbdev 00:02:47.072 [732/764] Linking target app/dpdk-test-pipeline 00:02:47.072 [733/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:47.072 [734/764] Linking target app/dpdk-test-mldev 00:02:47.330 [735/764] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:02:47.330 [736/764] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:47.330 [737/764] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:47.330 [738/764] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:47.587 [739/764] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:47.587 [740/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:47.587 [741/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:47.844 [742/764] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:47.844 [743/764] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:47.844 [744/764] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:47.844 [745/764] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:48.102 [746/764] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:48.102 [747/764] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:48.360 [748/764] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:48.360 [749/764] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:48.360 [750/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:48.618 [751/764] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:48.618 [752/764] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:48.618 [753/764] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:48.876 [754/764] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:48.876 [755/764] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:48.876 [756/764] Linking target app/dpdk-test-sad 00:02:48.876 [757/764] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:48.876 [758/764] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:48.876 [759/764] Linking target app/dpdk-test-regex 00:02:48.876 [760/764] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:49.133 [761/764] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:49.391 [762/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:49.391 [763/764] Linking target app/dpdk-test-security-perf 00:02:49.957 [764/764] Linking target app/dpdk-testpmd 00:02:49.957 
09:20:17 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:49.957 09:20:17 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:49.957 09:20:17 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:49.957 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:49.957 [0/1] Installing files. 00:02:50.219 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:50.219 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.219 Installing 
/home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.219 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.221 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.221 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.222 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.222 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.223 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 
00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.224 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.224 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.225 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.225 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.225 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_cmdline.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.225 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing 
lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.226 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.487 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing 
lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.488 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:50.488 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-dma-perf to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.488 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.489 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.490 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.491 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.491 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.491 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:02:50.491 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:50.491 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:02:50.491 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:50.491 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:02:50.491 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:02:50.491 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:02:50.491 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:50.491 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:02:50.491 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:50.491 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:02:50.491 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:50.491 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:02:50.491 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:50.491 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:02:50.491 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:50.491 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:02:50.491 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:50.492 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:02:50.492 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:50.492 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:02:50.492 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:50.492 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:02:50.492 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:50.492 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:02:50.492 Installing symlink pointing to librte_pci.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:50.492 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:02:50.492 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:50.492 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:02:50.492 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:50.492 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:02:50.492 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:50.492 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:02:50.492 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:50.492 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:02:50.492 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:50.492 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:02:50.492 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:50.492 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:02:50.492 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:50.492 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:02:50.492 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:50.492 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:02:50.492 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:50.492 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:02:50.492 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:50.492 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:02:50.492 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:50.492 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:02:50.492 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:50.492 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:02:50.492 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:50.492 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:02:50.492 Installing symlink pointing to librte_efd.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:50.492 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:02:50.492 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:50.492 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:02:50.492 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:50.492 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:02:50.492 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:50.492 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:02:50.492 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:50.492 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:02:50.492 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:50.492 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:02:50.492 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:50.492 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:02:50.492 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:50.492 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:02:50.492 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:50.492 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:02:50.492 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:50.492 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:02:50.492 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:50.492 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:02:50.492 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:50.492 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:02:50.492 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:50.492 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:02:50.492 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:50.492 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:02:50.492 Installing symlink pointing to librte_regexdev.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:50.492 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:02:50.492 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:50.492 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:02:50.492 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:50.492 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:02:50.492 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:50.492 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:02:50.492 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:50.492 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:02:50.492 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:50.492 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:02:50.492 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:50.492 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:02:50.493 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:50.493 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:02:50.493 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:50.493 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:02:50.493 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:50.493 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:02:50.493 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:50.493 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:02:50.493 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:50.493 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:02:50.493 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:50.493 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:02:50.493 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:50.493 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:02:50.493 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:50.493 Installing 
symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:02:50.493 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:50.493 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:02:50.493 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:50.493 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:02:50.493 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:02:50.493 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:02:50.493 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:02:50.493 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:02:50.751 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:02:50.751 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:02:50.751 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:02:50.751 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:02:50.751 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:02:50.751 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:02:50.751 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:02:50.751 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:02:50.751 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:02:50.751 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:02:50.751 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:02:50.751 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:02:50.751 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:02:50.751 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:02:50.751 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:02:50.751 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:02:50.751 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:02:50.751 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:02:50.751 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:02:50.751 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:02:50.751 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:02:50.751 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:02:50.751 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:02:50.751 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:02:50.751 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:02:50.751 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:02:50.751 './librte_power_intel_uncore.so.25.0' -> 
'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:02:50.751 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:02:50.751 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:02:50.751 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:02:50.751 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:02:50.751 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:02:50.751 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:02:50.751 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:02:50.751 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:02:50.751 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:02:50.751 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:02:50.751 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:02:50.751 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:02:50.751 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:02:50.751 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:02:50.751 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:02:50.751 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:02:50.751 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:02:50.751 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:02:50.752 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:02:50.752 ************************************ 00:02:50.752 END TEST build_native_dpdk 00:02:50.752 ************************************ 00:02:50.752 09:20:18 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:50.752 09:20:18 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:50.752 00:02:50.752 real 0m41.214s 00:02:50.752 user 4m44.320s 00:02:50.752 sys 0m42.989s 00:02:50.752 09:20:18 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:50.752 09:20:18 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:50.752 09:20:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:50.752 09:20:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:50.752 
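The final DPDK install stage above gathers the driver shared objects (bus_pci, bus_vdev, mempool_ring, net_i40e and the power PMDs) under lib/dpdk/pmds-25.0 and runs symlink-drivers-solibs.sh so each driver is reachable both through its versioned soname and through its unversioned name. As a hedged sketch of how such a plugin directory is typically consumed (the dpdk-testpmd binary and its arguments are illustrative assumptions, not part of this run):

    # Inspect the symlink chain created for one driver by the steps above
    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so*
    #   librte_bus_pci.so    -> librte_bus_pci.so.25
    #   librte_bus_pci.so.25 -> librte_bus_pci.so.25.0

    # EAL's -d option loads a single driver .so or every driver found in a directory
    ./dpdk-testpmd -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0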
09:20:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:50.752 09:20:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:50.752 09:20:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:50.752 09:20:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:50.752 09:20:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:50.752 09:20:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:50.752 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:51.010 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.010 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:51.010 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:51.269 Using 'verbs' RDMA provider 00:03:02.615 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:12.579 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:12.579 Creating mk/config.mk...done. 00:03:12.579 Creating mk/cc.flags.mk...done. 00:03:12.579 Type 'make' to build. 00:03:12.579 09:20:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:12.579 09:20:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:12.579 09:20:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:12.579 09:20:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:12.579 ************************************ 00:03:12.579 START TEST make 00:03:12.579 ************************************ 00:03:12.579 09:20:39 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:12.579 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:12.579 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:12.579 meson setup builddir \ 00:03:12.579 -Dwith-libaio=enabled \ 00:03:12.579 -Dwith-liburing=enabled \ 00:03:12.579 -Dwith-libvfn=disabled \ 00:03:12.579 -Dwith-spdk=disabled \ 00:03:12.579 -Dexamples=false \ 00:03:12.579 -Dtests=false \ 00:03:12.579 -Dtools=false && \ 00:03:12.579 meson compile -C builddir && \ 00:03:12.579 cd -) 00:03:12.579 make[1]: Nothing to be done for 'all'. 
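The configure step above resolves the freshly staged DPDK through the pkg-config metadata installed into build/lib/pkgconfig (libdpdk.pc and libdpdk-libs.pc), which is why the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" line is echoed right after the --with-dpdk=/home/vagrant/spdk_repo/dpdk/build option. As a minimal sketch, assuming the same vagrant paths as in this log, that metadata can be queried by hand:

    # Point pkg-config at the private DPDK install staged by the steps above
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig

    pkg-config --modversion libdpdk   # DPDK version recorded in libdpdk.pc
    pkg-config --cflags libdpdk       # include flags, e.g. -I.../dpdk/build/include
    pkg-config --libs libdpdk         # -L.../dpdk/build/lib plus the -lrte_* libraries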
00:03:14.494 The Meson build system 00:03:14.494 Version: 1.5.0 00:03:14.494 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:14.494 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:14.494 Build type: native build 00:03:14.494 Project name: xnvme 00:03:14.494 Project version: 0.7.5 00:03:14.494 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:14.494 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:14.494 Host machine cpu family: x86_64 00:03:14.494 Host machine cpu: x86_64 00:03:14.494 Message: host_machine.system: linux 00:03:14.494 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:14.494 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:14.494 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:14.494 Run-time dependency threads found: YES 00:03:14.494 Has header "setupapi.h" : NO 00:03:14.494 Has header "linux/blkzoned.h" : YES 00:03:14.494 Has header "linux/blkzoned.h" : YES (cached) 00:03:14.494 Has header "libaio.h" : YES 00:03:14.494 Library aio found: YES 00:03:14.494 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:14.494 Run-time dependency liburing found: YES 2.2 00:03:14.494 Dependency libvfn skipped: feature with-libvfn disabled 00:03:14.494 Found CMake: /usr/bin/cmake (3.27.7) 00:03:14.494 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:14.494 Subproject spdk : skipped: feature with-spdk disabled 00:03:14.494 Run-time dependency appleframeworks found: NO (tried framework) 00:03:14.494 Run-time dependency appleframeworks found: NO (tried framework) 00:03:14.494 Library rt found: YES 00:03:14.494 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:14.494 Configuring xnvme_config.h using configuration 00:03:14.494 Configuring xnvme.spec using configuration 00:03:14.494 Run-time dependency bash-completion found: YES 2.11 00:03:14.494 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:14.494 Program cp found: YES (/usr/bin/cp) 00:03:14.494 Build targets in project: 3 00:03:14.494 00:03:14.494 xnvme 0.7.5 00:03:14.494 00:03:14.494 Subprojects 00:03:14.494 spdk : NO Feature 'with-spdk' disabled 00:03:14.494 00:03:14.494 User defined options 00:03:14.494 examples : false 00:03:14.494 tests : false 00:03:14.494 tools : false 00:03:14.494 with-libaio : enabled 00:03:14.494 with-liburing: enabled 00:03:14.494 with-libvfn : disabled 00:03:14.494 with-spdk : disabled 00:03:14.494 00:03:14.494 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.494 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:14.494 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:14.494 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:14.494 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:14.494 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:14.494 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:14.494 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:14.494 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:14.753 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:14.753 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:14.753 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:14.753 [11/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:14.753 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:14.753 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:14.753 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:14.753 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:14.753 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:14.753 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:14.753 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:14.753 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:14.753 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:14.753 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:14.753 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:14.753 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:14.753 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:14.753 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:14.753 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:14.753 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:14.753 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:14.753 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:14.753 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:14.753 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:14.753 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:14.753 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:14.753 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:14.753 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:14.753 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:14.753 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:14.753 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:14.753 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:15.012 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:15.012 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:15.012 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:15.012 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:15.012 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:15.012 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:15.012 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:15.012 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:15.012 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:15.012 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:15.012 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:15.012 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:15.012 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:15.012 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:15.012 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:15.012 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:15.012 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:15.012 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:15.012 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:15.012 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:15.012 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:15.012 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:15.012 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:15.012 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:15.012 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:15.012 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:15.012 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:15.012 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:15.012 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:15.012 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:15.270 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:15.271 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:15.271 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:15.271 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:15.529 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:15.529 [75/76] Linking static target lib/libxnvme.a 00:03:15.529 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:15.529 INFO: autodetecting backend as ninja 00:03:15.529 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:15.529 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:47.635 CC lib/ut/ut.o 00:03:47.635 CC lib/log/log.o 00:03:47.635 CC lib/log/log_deprecated.o 00:03:47.635 CC lib/log/log_flags.o 00:03:47.635 CC lib/ut_mock/mock.o 00:03:47.635 LIB libspdk_log.a 00:03:47.635 LIB libspdk_ut.a 00:03:47.635 LIB libspdk_ut_mock.a 00:03:47.635 SO libspdk_log.so.7.1 00:03:47.635 SO libspdk_ut.so.2.0 00:03:47.635 SO libspdk_ut_mock.so.6.0 00:03:47.635 SYMLINK libspdk_ut_mock.so 00:03:47.635 SYMLINK libspdk_ut.so 00:03:47.635 SYMLINK libspdk_log.so 00:03:47.635 CC lib/dma/dma.o 00:03:47.635 CC lib/util/base64.o 00:03:47.635 CC lib/util/cpuset.o 00:03:47.635 CC lib/util/bit_array.o 00:03:47.635 CC lib/util/crc32.o 00:03:47.635 CC lib/util/crc16.o 00:03:47.635 CC lib/util/crc32c.o 00:03:47.635 CXX lib/trace_parser/trace.o 00:03:47.635 CC lib/ioat/ioat.o 00:03:47.635 CC lib/vfio_user/host/vfio_user_pci.o 00:03:47.635 CC lib/util/crc32_ieee.o 00:03:47.635 CC lib/util/crc64.o 00:03:47.635 CC lib/util/dif.o 00:03:47.635 LIB libspdk_dma.a 00:03:47.635 CC lib/util/fd.o 00:03:47.635 SO libspdk_dma.so.5.0 00:03:47.635 CC lib/util/fd_group.o 00:03:47.635 CC lib/util/file.o 00:03:47.635 SYMLINK libspdk_dma.so 00:03:47.635 CC lib/vfio_user/host/vfio_user.o 00:03:47.635 CC lib/util/hexlify.o 00:03:47.635 CC lib/util/iov.o 00:03:47.635 CC lib/util/math.o 
00:03:47.635 LIB libspdk_ioat.a 00:03:47.635 CC lib/util/net.o 00:03:47.635 SO libspdk_ioat.so.7.0 00:03:47.635 CC lib/util/pipe.o 00:03:47.635 SYMLINK libspdk_ioat.so 00:03:47.635 CC lib/util/strerror_tls.o 00:03:47.635 CC lib/util/string.o 00:03:47.635 CC lib/util/uuid.o 00:03:47.635 CC lib/util/xor.o 00:03:47.635 LIB libspdk_vfio_user.a 00:03:47.635 CC lib/util/zipf.o 00:03:47.635 CC lib/util/md5.o 00:03:47.635 SO libspdk_vfio_user.so.5.0 00:03:47.635 SYMLINK libspdk_vfio_user.so 00:03:47.635 LIB libspdk_util.a 00:03:47.635 LIB libspdk_trace_parser.a 00:03:47.635 SO libspdk_trace_parser.so.6.0 00:03:47.635 SO libspdk_util.so.10.1 00:03:47.635 SYMLINK libspdk_trace_parser.so 00:03:47.635 SYMLINK libspdk_util.so 00:03:47.892 CC lib/env_dpdk/env.o 00:03:47.892 CC lib/env_dpdk/memory.o 00:03:47.892 CC lib/rdma_utils/rdma_utils.o 00:03:47.892 CC lib/env_dpdk/pci.o 00:03:47.892 CC lib/env_dpdk/init.o 00:03:47.892 CC lib/env_dpdk/threads.o 00:03:47.892 CC lib/vmd/vmd.o 00:03:47.892 CC lib/idxd/idxd.o 00:03:47.892 CC lib/json/json_parse.o 00:03:47.892 CC lib/conf/conf.o 00:03:47.892 CC lib/json/json_util.o 00:03:47.892 CC lib/json/json_write.o 00:03:47.892 LIB libspdk_rdma_utils.a 00:03:47.892 SO libspdk_rdma_utils.so.1.0 00:03:47.892 LIB libspdk_conf.a 00:03:48.149 SO libspdk_conf.so.6.0 00:03:48.149 SYMLINK libspdk_rdma_utils.so 00:03:48.149 SYMLINK libspdk_conf.so 00:03:48.149 CC lib/idxd/idxd_user.o 00:03:48.149 CC lib/env_dpdk/pci_ioat.o 00:03:48.149 CC lib/vmd/led.o 00:03:48.149 CC lib/idxd/idxd_kernel.o 00:03:48.149 CC lib/rdma_provider/common.o 00:03:48.149 LIB libspdk_json.a 00:03:48.149 SO libspdk_json.so.6.0 00:03:48.149 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:48.149 CC lib/env_dpdk/pci_virtio.o 00:03:48.149 CC lib/env_dpdk/pci_vmd.o 00:03:48.149 SYMLINK libspdk_json.so 00:03:48.149 CC lib/env_dpdk/pci_idxd.o 00:03:48.407 CC lib/env_dpdk/pci_event.o 00:03:48.407 CC lib/env_dpdk/sigbus_handler.o 00:03:48.407 CC lib/env_dpdk/pci_dpdk.o 00:03:48.407 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:48.407 LIB libspdk_rdma_provider.a 00:03:48.407 LIB libspdk_idxd.a 00:03:48.407 SO libspdk_rdma_provider.so.7.0 00:03:48.407 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:48.407 LIB libspdk_vmd.a 00:03:48.407 SO libspdk_idxd.so.12.1 00:03:48.407 SYMLINK libspdk_rdma_provider.so 00:03:48.407 SO libspdk_vmd.so.6.0 00:03:48.407 SYMLINK libspdk_idxd.so 00:03:48.407 SYMLINK libspdk_vmd.so 00:03:48.407 CC lib/jsonrpc/jsonrpc_client.o 00:03:48.407 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:48.407 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:48.407 CC lib/jsonrpc/jsonrpc_server.o 00:03:48.665 LIB libspdk_jsonrpc.a 00:03:48.924 SO libspdk_jsonrpc.so.6.0 00:03:48.924 SYMLINK libspdk_jsonrpc.so 00:03:49.181 CC lib/rpc/rpc.o 00:03:49.181 LIB libspdk_env_dpdk.a 00:03:49.181 SO libspdk_env_dpdk.so.15.1 00:03:49.439 LIB libspdk_rpc.a 00:03:49.439 SO libspdk_rpc.so.6.0 00:03:49.439 SYMLINK libspdk_env_dpdk.so 00:03:49.439 SYMLINK libspdk_rpc.so 00:03:49.698 CC lib/notify/notify.o 00:03:49.698 CC lib/notify/notify_rpc.o 00:03:49.698 CC lib/keyring/keyring_rpc.o 00:03:49.698 CC lib/keyring/keyring.o 00:03:49.698 CC lib/trace/trace_rpc.o 00:03:49.698 CC lib/trace/trace.o 00:03:49.698 CC lib/trace/trace_flags.o 00:03:49.698 LIB libspdk_notify.a 00:03:49.698 SO libspdk_notify.so.6.0 00:03:49.698 SYMLINK libspdk_notify.so 00:03:49.698 LIB libspdk_keyring.a 00:03:49.955 SO libspdk_keyring.so.2.0 00:03:49.955 LIB libspdk_trace.a 00:03:49.955 SO libspdk_trace.so.11.0 00:03:49.955 SYMLINK libspdk_keyring.so 00:03:49.955 SYMLINK 
libspdk_trace.so 00:03:50.213 CC lib/thread/thread.o 00:03:50.213 CC lib/thread/iobuf.o 00:03:50.213 CC lib/sock/sock.o 00:03:50.214 CC lib/sock/sock_rpc.o 00:03:50.472 LIB libspdk_sock.a 00:03:50.472 SO libspdk_sock.so.10.0 00:03:50.731 SYMLINK libspdk_sock.so 00:03:50.990 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:50.990 CC lib/nvme/nvme_ctrlr.o 00:03:50.990 CC lib/nvme/nvme_fabric.o 00:03:50.990 CC lib/nvme/nvme_ns.o 00:03:50.990 CC lib/nvme/nvme_pcie_common.o 00:03:50.990 CC lib/nvme/nvme_ns_cmd.o 00:03:50.990 CC lib/nvme/nvme.o 00:03:50.990 CC lib/nvme/nvme_pcie.o 00:03:50.990 CC lib/nvme/nvme_qpair.o 00:03:51.556 CC lib/nvme/nvme_quirks.o 00:03:51.556 CC lib/nvme/nvme_transport.o 00:03:51.556 CC lib/nvme/nvme_discovery.o 00:03:51.556 LIB libspdk_thread.a 00:03:51.556 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:51.815 SO libspdk_thread.so.11.0 00:03:51.815 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:51.815 CC lib/nvme/nvme_tcp.o 00:03:51.815 SYMLINK libspdk_thread.so 00:03:51.815 CC lib/nvme/nvme_opal.o 00:03:51.815 CC lib/nvme/nvme_io_msg.o 00:03:52.074 CC lib/nvme/nvme_poll_group.o 00:03:52.074 CC lib/nvme/nvme_zns.o 00:03:52.074 CC lib/nvme/nvme_stubs.o 00:03:52.334 CC lib/nvme/nvme_auth.o 00:03:52.334 CC lib/nvme/nvme_cuse.o 00:03:52.334 CC lib/accel/accel.o 00:03:52.334 CC lib/accel/accel_rpc.o 00:03:52.334 CC lib/accel/accel_sw.o 00:03:52.592 CC lib/nvme/nvme_rdma.o 00:03:52.592 CC lib/blob/blobstore.o 00:03:52.592 CC lib/blob/request.o 00:03:52.592 CC lib/init/json_config.o 00:03:52.851 CC lib/virtio/virtio.o 00:03:52.851 CC lib/fsdev/fsdev.o 00:03:52.851 CC lib/init/subsystem.o 00:03:53.109 CC lib/blob/zeroes.o 00:03:53.109 CC lib/virtio/virtio_vhost_user.o 00:03:53.109 CC lib/virtio/virtio_vfio_user.o 00:03:53.109 CC lib/virtio/virtio_pci.o 00:03:53.109 CC lib/init/subsystem_rpc.o 00:03:53.109 CC lib/blob/blob_bs_dev.o 00:03:53.109 CC lib/fsdev/fsdev_io.o 00:03:53.109 CC lib/init/rpc.o 00:03:53.367 CC lib/fsdev/fsdev_rpc.o 00:03:53.367 LIB libspdk_virtio.a 00:03:53.367 SO libspdk_virtio.so.7.0 00:03:53.367 LIB libspdk_init.a 00:03:53.367 SYMLINK libspdk_virtio.so 00:03:53.368 LIB libspdk_accel.a 00:03:53.368 SO libspdk_init.so.6.0 00:03:53.368 SO libspdk_accel.so.16.0 00:03:53.368 SYMLINK libspdk_init.so 00:03:53.626 SYMLINK libspdk_accel.so 00:03:53.626 LIB libspdk_fsdev.a 00:03:53.626 SO libspdk_fsdev.so.2.0 00:03:53.626 SYMLINK libspdk_fsdev.so 00:03:53.626 CC lib/event/app.o 00:03:53.626 CC lib/event/reactor.o 00:03:53.626 CC lib/event/app_rpc.o 00:03:53.626 CC lib/event/log_rpc.o 00:03:53.626 CC lib/event/scheduler_static.o 00:03:53.626 CC lib/bdev/bdev.o 00:03:53.626 CC lib/bdev/bdev_rpc.o 00:03:53.884 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:53.884 CC lib/bdev/bdev_zone.o 00:03:53.884 CC lib/bdev/part.o 00:03:53.884 LIB libspdk_nvme.a 00:03:53.884 CC lib/bdev/scsi_nvme.o 00:03:54.142 SO libspdk_nvme.so.15.0 00:03:54.142 LIB libspdk_event.a 00:03:54.142 SO libspdk_event.so.14.0 00:03:54.142 SYMLINK libspdk_event.so 00:03:54.400 LIB libspdk_fuse_dispatcher.a 00:03:54.400 SYMLINK libspdk_nvme.so 00:03:54.400 SO libspdk_fuse_dispatcher.so.1.0 00:03:54.400 SYMLINK libspdk_fuse_dispatcher.so 00:03:55.775 LIB libspdk_blob.a 00:03:56.034 SO libspdk_blob.so.12.0 00:03:56.034 SYMLINK libspdk_blob.so 00:03:56.293 CC lib/blobfs/blobfs.o 00:03:56.293 CC lib/blobfs/tree.o 00:03:56.293 CC lib/lvol/lvol.o 00:03:56.552 LIB libspdk_bdev.a 00:03:56.552 SO libspdk_bdev.so.17.0 00:03:56.811 SYMLINK libspdk_bdev.so 00:03:56.811 CC lib/scsi/dev.o 00:03:56.811 CC lib/scsi/lun.o 00:03:56.811 CC 
lib/scsi/port.o 00:03:56.811 CC lib/scsi/scsi.o 00:03:56.811 LIB libspdk_blobfs.a 00:03:56.811 CC lib/nvmf/ctrlr.o 00:03:56.811 CC lib/nbd/nbd.o 00:03:56.811 CC lib/ftl/ftl_core.o 00:03:56.811 CC lib/ublk/ublk.o 00:03:56.811 SO libspdk_blobfs.so.11.0 00:03:57.069 SYMLINK libspdk_blobfs.so 00:03:57.069 CC lib/ublk/ublk_rpc.o 00:03:57.069 CC lib/ftl/ftl_init.o 00:03:57.069 CC lib/scsi/scsi_bdev.o 00:03:57.069 CC lib/nbd/nbd_rpc.o 00:03:57.069 CC lib/nvmf/ctrlr_discovery.o 00:03:57.069 CC lib/ftl/ftl_layout.o 00:03:57.069 LIB libspdk_lvol.a 00:03:57.069 CC lib/ftl/ftl_debug.o 00:03:57.069 SO libspdk_lvol.so.11.0 00:03:57.328 SYMLINK libspdk_lvol.so 00:03:57.328 CC lib/ftl/ftl_io.o 00:03:57.328 CC lib/ftl/ftl_sb.o 00:03:57.328 CC lib/ftl/ftl_l2p.o 00:03:57.328 LIB libspdk_nbd.a 00:03:57.328 SO libspdk_nbd.so.7.0 00:03:57.328 CC lib/nvmf/ctrlr_bdev.o 00:03:57.328 SYMLINK libspdk_nbd.so 00:03:57.328 CC lib/ftl/ftl_l2p_flat.o 00:03:57.328 CC lib/ftl/ftl_nv_cache.o 00:03:57.328 CC lib/nvmf/subsystem.o 00:03:57.595 CC lib/ftl/ftl_band.o 00:03:57.595 CC lib/ftl/ftl_band_ops.o 00:03:57.595 LIB libspdk_ublk.a 00:03:57.595 CC lib/scsi/scsi_pr.o 00:03:57.595 SO libspdk_ublk.so.3.0 00:03:57.595 CC lib/nvmf/nvmf.o 00:03:57.595 SYMLINK libspdk_ublk.so 00:03:57.595 CC lib/scsi/scsi_rpc.o 00:03:57.595 CC lib/scsi/task.o 00:03:57.854 CC lib/ftl/ftl_writer.o 00:03:57.854 CC lib/nvmf/nvmf_rpc.o 00:03:57.854 CC lib/nvmf/transport.o 00:03:57.854 CC lib/ftl/ftl_rq.o 00:03:57.854 LIB libspdk_scsi.a 00:03:57.854 SO libspdk_scsi.so.9.0 00:03:57.854 CC lib/ftl/ftl_reloc.o 00:03:58.158 SYMLINK libspdk_scsi.so 00:03:58.158 CC lib/nvmf/tcp.o 00:03:58.158 CC lib/nvmf/stubs.o 00:03:58.158 CC lib/nvmf/mdns_server.o 00:03:58.158 CC lib/nvmf/rdma.o 00:03:58.158 CC lib/ftl/ftl_l2p_cache.o 00:03:58.440 CC lib/nvmf/auth.o 00:03:58.441 CC lib/ftl/ftl_p2l.o 00:03:58.441 CC lib/ftl/ftl_p2l_log.o 00:03:58.441 CC lib/ftl/mngt/ftl_mngt.o 00:03:58.699 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:58.699 CC lib/iscsi/conn.o 00:03:58.699 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.699 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.699 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.958 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:59.217 CC lib/ftl/utils/ftl_conf.o 00:03:59.217 CC lib/ftl/utils/ftl_md.o 00:03:59.217 CC lib/ftl/utils/ftl_mempool.o 00:03:59.217 CC lib/ftl/utils/ftl_bitmap.o 00:03:59.217 CC lib/iscsi/init_grp.o 00:03:59.217 CC lib/ftl/utils/ftl_property.o 00:03:59.217 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:59.217 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:59.476 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:59.476 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:59.476 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.476 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.476 CC lib/iscsi/iscsi.o 00:03:59.476 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:59.476 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.476 CC lib/iscsi/param.o 00:03:59.476 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.476 CC lib/iscsi/portal_grp.o 00:03:59.736 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.736 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.736 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:59.736 CC lib/vhost/vhost.o 00:03:59.736 CC lib/vhost/vhost_rpc.o 00:03:59.736 CC 
lib/vhost/vhost_scsi.o 00:03:59.736 CC lib/vhost/vhost_blk.o 00:03:59.736 CC lib/vhost/rte_vhost_user.o 00:03:59.736 CC lib/iscsi/tgt_node.o 00:03:59.994 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:59.994 CC lib/ftl/base/ftl_base_dev.o 00:03:59.994 LIB libspdk_nvmf.a 00:03:59.994 SO libspdk_nvmf.so.20.0 00:03:59.994 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.994 CC lib/iscsi/iscsi_subsystem.o 00:04:00.253 CC lib/ftl/ftl_trace.o 00:04:00.253 CC lib/iscsi/iscsi_rpc.o 00:04:00.253 SYMLINK libspdk_nvmf.so 00:04:00.253 CC lib/iscsi/task.o 00:04:00.253 LIB libspdk_ftl.a 00:04:00.511 SO libspdk_ftl.so.9.0 00:04:00.511 LIB libspdk_iscsi.a 00:04:00.511 SO libspdk_iscsi.so.8.0 00:04:00.770 LIB libspdk_vhost.a 00:04:00.770 SO libspdk_vhost.so.8.0 00:04:00.770 SYMLINK libspdk_ftl.so 00:04:00.770 SYMLINK libspdk_iscsi.so 00:04:00.770 SYMLINK libspdk_vhost.so 00:04:01.028 CC module/env_dpdk/env_dpdk_rpc.o 00:04:01.028 CC module/keyring/file/keyring.o 00:04:01.028 CC module/accel/ioat/accel_ioat.o 00:04:01.028 CC module/blob/bdev/blob_bdev.o 00:04:01.028 CC module/keyring/linux/keyring.o 00:04:01.028 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:01.028 CC module/fsdev/aio/fsdev_aio.o 00:04:01.028 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:01.028 CC module/sock/posix/posix.o 00:04:01.028 CC module/accel/error/accel_error.o 00:04:01.286 LIB libspdk_env_dpdk_rpc.a 00:04:01.286 SO libspdk_env_dpdk_rpc.so.6.0 00:04:01.286 CC module/keyring/file/keyring_rpc.o 00:04:01.286 CC module/keyring/linux/keyring_rpc.o 00:04:01.286 SYMLINK libspdk_env_dpdk_rpc.so 00:04:01.286 LIB libspdk_scheduler_dpdk_governor.a 00:04:01.286 LIB libspdk_scheduler_dynamic.a 00:04:01.286 CC module/accel/error/accel_error_rpc.o 00:04:01.286 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:01.286 CC module/accel/ioat/accel_ioat_rpc.o 00:04:01.286 SO libspdk_scheduler_dynamic.so.4.0 00:04:01.286 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:01.286 LIB libspdk_keyring_linux.a 00:04:01.286 SYMLINK libspdk_scheduler_dynamic.so 00:04:01.286 LIB libspdk_keyring_file.a 00:04:01.286 SO libspdk_keyring_linux.so.1.0 00:04:01.286 SO libspdk_keyring_file.so.2.0 00:04:01.286 LIB libspdk_accel_error.a 00:04:01.286 LIB libspdk_blob_bdev.a 00:04:01.544 CC module/accel/dsa/accel_dsa.o 00:04:01.544 SO libspdk_blob_bdev.so.12.0 00:04:01.544 SO libspdk_accel_error.so.2.0 00:04:01.544 SYMLINK libspdk_keyring_linux.so 00:04:01.544 LIB libspdk_accel_ioat.a 00:04:01.544 SYMLINK libspdk_keyring_file.so 00:04:01.544 CC module/accel/dsa/accel_dsa_rpc.o 00:04:01.544 SO libspdk_accel_ioat.so.6.0 00:04:01.544 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:01.544 SYMLINK libspdk_blob_bdev.so 00:04:01.544 CC module/fsdev/aio/linux_aio_mgr.o 00:04:01.544 SYMLINK libspdk_accel_error.so 00:04:01.545 SYMLINK libspdk_accel_ioat.so 00:04:01.545 CC module/accel/iaa/accel_iaa.o 00:04:01.545 CC module/scheduler/gscheduler/gscheduler.o 00:04:01.545 CC module/accel/iaa/accel_iaa_rpc.o 00:04:01.545 LIB libspdk_accel_dsa.a 00:04:01.802 SO libspdk_accel_dsa.so.5.0 00:04:01.802 LIB libspdk_scheduler_gscheduler.a 00:04:01.802 SO libspdk_scheduler_gscheduler.so.4.0 00:04:01.802 SYMLINK libspdk_accel_dsa.so 00:04:01.802 SYMLINK libspdk_scheduler_gscheduler.so 00:04:01.802 LIB libspdk_accel_iaa.a 00:04:01.802 CC module/bdev/delay/vbdev_delay.o 00:04:01.802 CC module/bdev/error/vbdev_error.o 00:04:01.802 SO libspdk_accel_iaa.so.3.0 00:04:01.802 CC module/blobfs/bdev/blobfs_bdev.o 00:04:01.802 CC module/bdev/gpt/gpt.o 00:04:01.802 LIB libspdk_fsdev_aio.a 00:04:01.802 SYMLINK 
libspdk_accel_iaa.so 00:04:01.802 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:01.802 SO libspdk_fsdev_aio.so.1.0 00:04:01.802 CC module/bdev/malloc/bdev_malloc.o 00:04:01.802 CC module/bdev/lvol/vbdev_lvol.o 00:04:01.802 CC module/bdev/null/bdev_null.o 00:04:01.802 LIB libspdk_sock_posix.a 00:04:01.802 SYMLINK libspdk_fsdev_aio.so 00:04:01.802 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:01.802 SO libspdk_sock_posix.so.6.0 00:04:02.060 CC module/bdev/gpt/vbdev_gpt.o 00:04:02.060 CC module/bdev/error/vbdev_error_rpc.o 00:04:02.060 SYMLINK libspdk_sock_posix.so 00:04:02.060 CC module/bdev/null/bdev_null_rpc.o 00:04:02.060 LIB libspdk_blobfs_bdev.a 00:04:02.060 SO libspdk_blobfs_bdev.so.6.0 00:04:02.060 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:02.060 SYMLINK libspdk_blobfs_bdev.so 00:04:02.060 LIB libspdk_bdev_error.a 00:04:02.060 LIB libspdk_bdev_null.a 00:04:02.060 SO libspdk_bdev_error.so.6.0 00:04:02.060 CC module/bdev/nvme/bdev_nvme.o 00:04:02.060 LIB libspdk_bdev_gpt.a 00:04:02.060 SO libspdk_bdev_null.so.6.0 00:04:02.318 SO libspdk_bdev_gpt.so.6.0 00:04:02.318 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:02.318 SYMLINK libspdk_bdev_error.so 00:04:02.318 LIB libspdk_bdev_delay.a 00:04:02.318 CC module/bdev/passthru/vbdev_passthru.o 00:04:02.318 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:02.318 SYMLINK libspdk_bdev_null.so 00:04:02.318 SYMLINK libspdk_bdev_gpt.so 00:04:02.318 CC module/bdev/raid/bdev_raid.o 00:04:02.318 SO libspdk_bdev_delay.so.6.0 00:04:02.318 SYMLINK libspdk_bdev_delay.so 00:04:02.318 CC module/bdev/raid/bdev_raid_rpc.o 00:04:02.318 CC module/bdev/split/vbdev_split.o 00:04:02.318 LIB libspdk_bdev_lvol.a 00:04:02.318 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:02.318 CC module/bdev/xnvme/bdev_xnvme.o 00:04:02.318 LIB libspdk_bdev_malloc.a 00:04:02.318 SO libspdk_bdev_lvol.so.6.0 00:04:02.318 SO libspdk_bdev_malloc.so.6.0 00:04:02.576 SYMLINK libspdk_bdev_malloc.so 00:04:02.576 SYMLINK libspdk_bdev_lvol.so 00:04:02.576 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:02.576 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:02.576 CC module/bdev/split/vbdev_split_rpc.o 00:04:02.576 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:02.576 CC module/bdev/raid/bdev_raid_sb.o 00:04:02.576 CC module/bdev/nvme/nvme_rpc.o 00:04:02.576 CC module/bdev/nvme/bdev_mdns_client.o 00:04:02.576 LIB libspdk_bdev_split.a 00:04:02.576 LIB libspdk_bdev_xnvme.a 00:04:02.576 SO libspdk_bdev_split.so.6.0 00:04:02.576 SO libspdk_bdev_xnvme.so.3.0 00:04:02.576 LIB libspdk_bdev_passthru.a 00:04:02.576 LIB libspdk_bdev_zone_block.a 00:04:02.835 SO libspdk_bdev_passthru.so.6.0 00:04:02.835 SYMLINK libspdk_bdev_xnvme.so 00:04:02.835 SYMLINK libspdk_bdev_split.so 00:04:02.835 SO libspdk_bdev_zone_block.so.6.0 00:04:02.835 SYMLINK libspdk_bdev_passthru.so 00:04:02.835 CC module/bdev/nvme/vbdev_opal.o 00:04:02.835 CC module/bdev/raid/raid0.o 00:04:02.835 CC module/bdev/raid/raid1.o 00:04:02.835 SYMLINK libspdk_bdev_zone_block.so 00:04:02.835 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:02.835 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:02.835 CC module/bdev/aio/bdev_aio.o 00:04:02.835 CC module/bdev/ftl/bdev_ftl.o 00:04:03.093 CC module/bdev/raid/concat.o 00:04:03.093 CC module/bdev/aio/bdev_aio_rpc.o 00:04:03.093 CC module/bdev/iscsi/bdev_iscsi.o 00:04:03.093 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:03.093 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:03.093 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:03.093 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:03.093 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:04:03.093 LIB libspdk_bdev_raid.a 00:04:03.093 SO libspdk_bdev_raid.so.6.0 00:04:03.351 LIB libspdk_bdev_ftl.a 00:04:03.351 LIB libspdk_bdev_aio.a 00:04:03.351 SO libspdk_bdev_ftl.so.6.0 00:04:03.351 SYMLINK libspdk_bdev_raid.so 00:04:03.351 SO libspdk_bdev_aio.so.6.0 00:04:03.351 SYMLINK libspdk_bdev_ftl.so 00:04:03.351 SYMLINK libspdk_bdev_aio.so 00:04:03.351 LIB libspdk_bdev_iscsi.a 00:04:03.351 SO libspdk_bdev_iscsi.so.6.0 00:04:03.351 SYMLINK libspdk_bdev_iscsi.so 00:04:03.609 LIB libspdk_bdev_virtio.a 00:04:03.609 SO libspdk_bdev_virtio.so.6.0 00:04:03.609 SYMLINK libspdk_bdev_virtio.so 00:04:04.542 LIB libspdk_bdev_nvme.a 00:04:04.542 SO libspdk_bdev_nvme.so.7.1 00:04:04.542 SYMLINK libspdk_bdev_nvme.so 00:04:04.799 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:04.799 CC module/event/subsystems/sock/sock.o 00:04:04.799 CC module/event/subsystems/vmd/vmd.o 00:04:04.799 CC module/event/subsystems/iobuf/iobuf.o 00:04:04.799 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:04.799 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:04.799 CC module/event/subsystems/fsdev/fsdev.o 00:04:04.800 CC module/event/subsystems/scheduler/scheduler.o 00:04:04.800 CC module/event/subsystems/keyring/keyring.o 00:04:05.058 LIB libspdk_event_vhost_blk.a 00:04:05.058 LIB libspdk_event_vmd.a 00:04:05.058 LIB libspdk_event_keyring.a 00:04:05.058 LIB libspdk_event_sock.a 00:04:05.058 LIB libspdk_event_fsdev.a 00:04:05.058 SO libspdk_event_vhost_blk.so.3.0 00:04:05.058 LIB libspdk_event_scheduler.a 00:04:05.058 LIB libspdk_event_iobuf.a 00:04:05.058 SO libspdk_event_keyring.so.1.0 00:04:05.058 SO libspdk_event_sock.so.5.0 00:04:05.058 SO libspdk_event_vmd.so.6.0 00:04:05.058 SO libspdk_event_fsdev.so.1.0 00:04:05.058 SO libspdk_event_scheduler.so.4.0 00:04:05.058 SO libspdk_event_iobuf.so.3.0 00:04:05.058 SYMLINK libspdk_event_keyring.so 00:04:05.058 SYMLINK libspdk_event_vhost_blk.so 00:04:05.058 SYMLINK libspdk_event_sock.so 00:04:05.058 SYMLINK libspdk_event_fsdev.so 00:04:05.058 SYMLINK libspdk_event_vmd.so 00:04:05.058 SYMLINK libspdk_event_scheduler.so 00:04:05.058 SYMLINK libspdk_event_iobuf.so 00:04:05.316 CC module/event/subsystems/accel/accel.o 00:04:05.574 LIB libspdk_event_accel.a 00:04:05.574 SO libspdk_event_accel.so.6.0 00:04:05.574 SYMLINK libspdk_event_accel.so 00:04:05.832 CC module/event/subsystems/bdev/bdev.o 00:04:05.832 LIB libspdk_event_bdev.a 00:04:05.832 SO libspdk_event_bdev.so.6.0 00:04:06.090 SYMLINK libspdk_event_bdev.so 00:04:06.090 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:06.090 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:06.090 CC module/event/subsystems/scsi/scsi.o 00:04:06.090 CC module/event/subsystems/nbd/nbd.o 00:04:06.090 CC module/event/subsystems/ublk/ublk.o 00:04:06.348 LIB libspdk_event_nbd.a 00:04:06.348 LIB libspdk_event_ublk.a 00:04:06.348 LIB libspdk_event_scsi.a 00:04:06.348 SO libspdk_event_nbd.so.6.0 00:04:06.348 SO libspdk_event_ublk.so.3.0 00:04:06.348 SO libspdk_event_scsi.so.6.0 00:04:06.349 SYMLINK libspdk_event_nbd.so 00:04:06.349 SYMLINK libspdk_event_ublk.so 00:04:06.349 LIB libspdk_event_nvmf.a 00:04:06.349 SYMLINK libspdk_event_scsi.so 00:04:06.349 SO libspdk_event_nvmf.so.6.0 00:04:06.349 SYMLINK libspdk_event_nvmf.so 00:04:06.607 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:06.607 CC module/event/subsystems/iscsi/iscsi.o 00:04:06.866 LIB libspdk_event_vhost_scsi.a 00:04:06.866 LIB libspdk_event_iscsi.a 00:04:06.866 SO libspdk_event_vhost_scsi.so.3.0 00:04:06.866 SO 
libspdk_event_iscsi.so.6.0 00:04:06.866 SYMLINK libspdk_event_vhost_scsi.so 00:04:06.866 SYMLINK libspdk_event_iscsi.so 00:04:06.866 SO libspdk.so.6.0 00:04:06.866 SYMLINK libspdk.so 00:04:07.123 CC app/spdk_lspci/spdk_lspci.o 00:04:07.123 CC app/spdk_nvme_identify/identify.o 00:04:07.123 CC app/trace_record/trace_record.o 00:04:07.123 CC app/spdk_nvme_perf/perf.o 00:04:07.123 CXX app/trace/trace.o 00:04:07.123 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.123 CC app/spdk_tgt/spdk_tgt.o 00:04:07.123 CC app/nvmf_tgt/nvmf_main.o 00:04:07.123 CC test/thread/poller_perf/poller_perf.o 00:04:07.123 CC examples/util/zipf/zipf.o 00:04:07.123 LINK spdk_lspci 00:04:07.382 LINK zipf 00:04:07.382 LINK poller_perf 00:04:07.382 LINK nvmf_tgt 00:04:07.382 LINK spdk_tgt 00:04:07.382 LINK spdk_trace_record 00:04:07.382 LINK iscsi_tgt 00:04:07.382 CC app/spdk_nvme_discover/discovery_aer.o 00:04:07.382 LINK spdk_trace 00:04:07.639 CC examples/ioat/perf/perf.o 00:04:07.639 CC test/dma/test_dma/test_dma.o 00:04:07.639 CC app/spdk_top/spdk_top.o 00:04:07.639 CC examples/vmd/lsvmd/lsvmd.o 00:04:07.639 LINK spdk_nvme_discover 00:04:07.639 CC examples/idxd/perf/perf.o 00:04:07.639 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:07.639 CC test/app/bdev_svc/bdev_svc.o 00:04:07.639 LINK lsvmd 00:04:07.896 LINK ioat_perf 00:04:07.896 LINK interrupt_tgt 00:04:07.896 LINK spdk_nvme_identify 00:04:07.896 LINK bdev_svc 00:04:07.896 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.896 LINK spdk_nvme_perf 00:04:07.896 CC examples/vmd/led/led.o 00:04:07.896 LINK idxd_perf 00:04:07.896 CC examples/ioat/verify/verify.o 00:04:07.896 LINK test_dma 00:04:08.154 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:08.154 LINK led 00:04:08.154 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:08.154 CC test/app/histogram_perf/histogram_perf.o 00:04:08.154 CC test/app/jsoncat/jsoncat.o 00:04:08.154 LINK verify 00:04:08.154 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:08.154 TEST_HEADER include/spdk/accel.h 00:04:08.155 TEST_HEADER include/spdk/accel_module.h 00:04:08.155 TEST_HEADER include/spdk/assert.h 00:04:08.155 TEST_HEADER include/spdk/barrier.h 00:04:08.155 TEST_HEADER include/spdk/base64.h 00:04:08.155 TEST_HEADER include/spdk/bdev.h 00:04:08.155 TEST_HEADER include/spdk/bdev_module.h 00:04:08.155 TEST_HEADER include/spdk/bdev_zone.h 00:04:08.155 TEST_HEADER include/spdk/bit_array.h 00:04:08.155 TEST_HEADER include/spdk/bit_pool.h 00:04:08.155 TEST_HEADER include/spdk/blob_bdev.h 00:04:08.155 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:08.155 TEST_HEADER include/spdk/blobfs.h 00:04:08.155 TEST_HEADER include/spdk/blob.h 00:04:08.155 TEST_HEADER include/spdk/conf.h 00:04:08.155 TEST_HEADER include/spdk/config.h 00:04:08.155 TEST_HEADER include/spdk/cpuset.h 00:04:08.155 TEST_HEADER include/spdk/crc16.h 00:04:08.155 TEST_HEADER include/spdk/crc32.h 00:04:08.155 TEST_HEADER include/spdk/crc64.h 00:04:08.155 LINK nvme_fuzz 00:04:08.155 TEST_HEADER include/spdk/dif.h 00:04:08.155 TEST_HEADER include/spdk/dma.h 00:04:08.155 TEST_HEADER include/spdk/endian.h 00:04:08.155 TEST_HEADER include/spdk/env_dpdk.h 00:04:08.155 TEST_HEADER include/spdk/env.h 00:04:08.412 TEST_HEADER include/spdk/event.h 00:04:08.412 TEST_HEADER include/spdk/fd_group.h 00:04:08.412 LINK histogram_perf 00:04:08.412 TEST_HEADER include/spdk/fd.h 00:04:08.412 TEST_HEADER include/spdk/file.h 00:04:08.412 TEST_HEADER include/spdk/fsdev.h 00:04:08.412 TEST_HEADER include/spdk/fsdev_module.h 00:04:08.412 TEST_HEADER include/spdk/ftl.h 00:04:08.412 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:04:08.412 TEST_HEADER include/spdk/gpt_spec.h 00:04:08.412 TEST_HEADER include/spdk/hexlify.h 00:04:08.412 TEST_HEADER include/spdk/histogram_data.h 00:04:08.412 LINK jsoncat 00:04:08.412 TEST_HEADER include/spdk/idxd.h 00:04:08.412 TEST_HEADER include/spdk/idxd_spec.h 00:04:08.412 TEST_HEADER include/spdk/init.h 00:04:08.412 CC app/spdk_dd/spdk_dd.o 00:04:08.412 CC examples/thread/thread/thread_ex.o 00:04:08.412 TEST_HEADER include/spdk/ioat.h 00:04:08.412 TEST_HEADER include/spdk/ioat_spec.h 00:04:08.412 TEST_HEADER include/spdk/iscsi_spec.h 00:04:08.412 TEST_HEADER include/spdk/json.h 00:04:08.412 TEST_HEADER include/spdk/jsonrpc.h 00:04:08.412 TEST_HEADER include/spdk/keyring.h 00:04:08.412 TEST_HEADER include/spdk/keyring_module.h 00:04:08.412 TEST_HEADER include/spdk/likely.h 00:04:08.412 TEST_HEADER include/spdk/log.h 00:04:08.412 TEST_HEADER include/spdk/lvol.h 00:04:08.412 TEST_HEADER include/spdk/md5.h 00:04:08.412 TEST_HEADER include/spdk/memory.h 00:04:08.412 TEST_HEADER include/spdk/mmio.h 00:04:08.412 TEST_HEADER include/spdk/nbd.h 00:04:08.412 TEST_HEADER include/spdk/net.h 00:04:08.412 TEST_HEADER include/spdk/notify.h 00:04:08.412 TEST_HEADER include/spdk/nvme.h 00:04:08.412 TEST_HEADER include/spdk/nvme_intel.h 00:04:08.412 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:08.412 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:08.412 TEST_HEADER include/spdk/nvme_spec.h 00:04:08.412 TEST_HEADER include/spdk/nvme_zns.h 00:04:08.412 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:08.412 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:08.412 TEST_HEADER include/spdk/nvmf.h 00:04:08.412 TEST_HEADER include/spdk/nvmf_spec.h 00:04:08.412 TEST_HEADER include/spdk/nvmf_transport.h 00:04:08.412 TEST_HEADER include/spdk/opal.h 00:04:08.412 TEST_HEADER include/spdk/opal_spec.h 00:04:08.412 TEST_HEADER include/spdk/pci_ids.h 00:04:08.412 TEST_HEADER include/spdk/pipe.h 00:04:08.412 TEST_HEADER include/spdk/queue.h 00:04:08.412 TEST_HEADER include/spdk/reduce.h 00:04:08.412 TEST_HEADER include/spdk/rpc.h 00:04:08.412 TEST_HEADER include/spdk/scheduler.h 00:04:08.412 TEST_HEADER include/spdk/scsi.h 00:04:08.412 TEST_HEADER include/spdk/scsi_spec.h 00:04:08.412 TEST_HEADER include/spdk/sock.h 00:04:08.412 TEST_HEADER include/spdk/stdinc.h 00:04:08.412 TEST_HEADER include/spdk/string.h 00:04:08.412 TEST_HEADER include/spdk/thread.h 00:04:08.412 TEST_HEADER include/spdk/trace.h 00:04:08.412 TEST_HEADER include/spdk/trace_parser.h 00:04:08.412 TEST_HEADER include/spdk/tree.h 00:04:08.412 TEST_HEADER include/spdk/ublk.h 00:04:08.412 TEST_HEADER include/spdk/util.h 00:04:08.412 TEST_HEADER include/spdk/uuid.h 00:04:08.412 TEST_HEADER include/spdk/version.h 00:04:08.412 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:08.412 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:08.412 TEST_HEADER include/spdk/vhost.h 00:04:08.412 TEST_HEADER include/spdk/vmd.h 00:04:08.412 TEST_HEADER include/spdk/xor.h 00:04:08.412 TEST_HEADER include/spdk/zipf.h 00:04:08.412 CXX test/cpp_headers/accel.o 00:04:08.412 CC test/app/stub/stub.o 00:04:08.412 CC examples/sock/hello_world/hello_sock.o 00:04:08.670 CXX test/cpp_headers/accel_module.o 00:04:08.670 LINK thread 00:04:08.670 LINK spdk_top 00:04:08.670 CC app/fio/nvme/fio_plugin.o 00:04:08.670 LINK vhost_fuzz 00:04:08.670 LINK spdk_dd 00:04:08.670 LINK stub 00:04:08.670 CXX test/cpp_headers/assert.o 00:04:08.670 CC test/env/mem_callbacks/mem_callbacks.o 00:04:08.670 LINK hello_sock 00:04:08.670 CC test/env/vtophys/vtophys.o 00:04:08.670 CXX 
test/cpp_headers/barrier.o 00:04:08.928 CC app/fio/bdev/fio_plugin.o 00:04:08.928 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:08.928 CC app/vhost/vhost.o 00:04:08.928 CC test/event/event_perf/event_perf.o 00:04:08.928 CXX test/cpp_headers/base64.o 00:04:08.928 LINK vtophys 00:04:08.928 LINK env_dpdk_post_init 00:04:08.928 CXX test/cpp_headers/bdev.o 00:04:08.928 LINK event_perf 00:04:08.928 LINK spdk_nvme 00:04:08.928 CC examples/accel/perf/accel_perf.o 00:04:08.928 LINK vhost 00:04:09.185 CXX test/cpp_headers/bdev_module.o 00:04:09.185 CC test/event/reactor/reactor.o 00:04:09.185 LINK mem_callbacks 00:04:09.185 CC examples/blob/cli/blobcli.o 00:04:09.185 CC examples/blob/hello_world/hello_blob.o 00:04:09.185 CC test/event/reactor_perf/reactor_perf.o 00:04:09.185 CC examples/nvme/hello_world/hello_world.o 00:04:09.185 LINK reactor 00:04:09.443 LINK spdk_bdev 00:04:09.443 CXX test/cpp_headers/bdev_zone.o 00:04:09.443 CC test/env/memory/memory_ut.o 00:04:09.443 LINK hello_blob 00:04:09.443 LINK reactor_perf 00:04:09.443 LINK accel_perf 00:04:09.443 CC examples/nvme/reconnect/reconnect.o 00:04:09.443 LINK hello_world 00:04:09.443 CXX test/cpp_headers/bit_array.o 00:04:09.443 CC test/event/app_repeat/app_repeat.o 00:04:09.443 CXX test/cpp_headers/bit_pool.o 00:04:09.443 CXX test/cpp_headers/blob_bdev.o 00:04:09.700 CXX test/cpp_headers/blobfs_bdev.o 00:04:09.700 LINK app_repeat 00:04:09.700 LINK blobcli 00:04:09.700 CC test/event/scheduler/scheduler.o 00:04:09.700 LINK iscsi_fuzz 00:04:09.700 LINK reconnect 00:04:09.700 CXX test/cpp_headers/blobfs.o 00:04:09.700 CXX test/cpp_headers/blob.o 00:04:09.700 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:09.700 CC test/env/pci/pci_ut.o 00:04:09.958 CC examples/bdev/hello_world/hello_bdev.o 00:04:09.958 LINK scheduler 00:04:09.958 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:09.958 CC examples/nvme/arbitration/arbitration.o 00:04:09.958 CXX test/cpp_headers/conf.o 00:04:09.958 CC examples/nvme/hotplug/hotplug.o 00:04:09.958 CC test/nvme/aer/aer.o 00:04:09.958 LINK hello_fsdev 00:04:09.958 LINK hello_bdev 00:04:09.958 CXX test/cpp_headers/config.o 00:04:10.217 CXX test/cpp_headers/cpuset.o 00:04:10.217 CC test/rpc_client/rpc_client_test.o 00:04:10.217 LINK hotplug 00:04:10.217 LINK arbitration 00:04:10.217 LINK pci_ut 00:04:10.217 LINK memory_ut 00:04:10.217 CXX test/cpp_headers/crc16.o 00:04:10.217 LINK aer 00:04:10.217 CC examples/bdev/bdevperf/bdevperf.o 00:04:10.217 LINK rpc_client_test 00:04:10.217 LINK nvme_manage 00:04:10.475 CC test/accel/dif/dif.o 00:04:10.475 CXX test/cpp_headers/crc32.o 00:04:10.475 CC test/nvme/sgl/sgl.o 00:04:10.475 CC test/nvme/reset/reset.o 00:04:10.475 CC test/blobfs/mkfs/mkfs.o 00:04:10.475 CC test/nvme/e2edp/nvme_dp.o 00:04:10.475 CC test/nvme/overhead/overhead.o 00:04:10.475 CXX test/cpp_headers/crc64.o 00:04:10.475 CC test/lvol/esnap/esnap.o 00:04:10.475 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:10.733 LINK mkfs 00:04:10.733 CXX test/cpp_headers/dif.o 00:04:10.733 LINK reset 00:04:10.733 LINK cmb_copy 00:04:10.733 LINK sgl 00:04:10.733 LINK nvme_dp 00:04:10.733 CXX test/cpp_headers/dma.o 00:04:10.733 CXX test/cpp_headers/endian.o 00:04:10.733 LINK overhead 00:04:10.733 CC test/nvme/err_injection/err_injection.o 00:04:10.733 CXX test/cpp_headers/env_dpdk.o 00:04:10.733 CC examples/nvme/abort/abort.o 00:04:10.991 CC test/nvme/startup/startup.o 00:04:10.991 CXX test/cpp_headers/env.o 00:04:10.991 CC test/nvme/reserve/reserve.o 00:04:10.991 CC test/nvme/simple_copy/simple_copy.o 00:04:10.991 LINK 
bdevperf 00:04:10.991 LINK err_injection 00:04:10.991 LINK startup 00:04:10.991 CC test/nvme/connect_stress/connect_stress.o 00:04:10.991 LINK dif 00:04:10.991 CXX test/cpp_headers/event.o 00:04:11.248 LINK simple_copy 00:04:11.248 LINK reserve 00:04:11.248 CC test/nvme/boot_partition/boot_partition.o 00:04:11.248 CXX test/cpp_headers/fd_group.o 00:04:11.248 LINK abort 00:04:11.248 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:11.248 CC test/nvme/compliance/nvme_compliance.o 00:04:11.248 LINK connect_stress 00:04:11.248 CC test/nvme/fused_ordering/fused_ordering.o 00:04:11.248 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:11.248 CC test/nvme/fdp/fdp.o 00:04:11.248 CXX test/cpp_headers/fd.o 00:04:11.248 LINK boot_partition 00:04:11.248 CXX test/cpp_headers/file.o 00:04:11.506 LINK pmr_persistence 00:04:11.506 CC test/nvme/cuse/cuse.o 00:04:11.506 LINK fused_ordering 00:04:11.506 CXX test/cpp_headers/fsdev.o 00:04:11.506 LINK doorbell_aers 00:04:11.506 CXX test/cpp_headers/fsdev_module.o 00:04:11.506 CXX test/cpp_headers/ftl.o 00:04:11.506 LINK nvme_compliance 00:04:11.506 LINK fdp 00:04:11.506 CXX test/cpp_headers/fuse_dispatcher.o 00:04:11.506 CXX test/cpp_headers/gpt_spec.o 00:04:11.506 CC test/bdev/bdevio/bdevio.o 00:04:11.765 CXX test/cpp_headers/hexlify.o 00:04:11.765 CXX test/cpp_headers/histogram_data.o 00:04:11.765 CXX test/cpp_headers/idxd.o 00:04:11.765 CC examples/nvmf/nvmf/nvmf.o 00:04:11.765 CXX test/cpp_headers/idxd_spec.o 00:04:11.765 CXX test/cpp_headers/init.o 00:04:11.765 CXX test/cpp_headers/ioat.o 00:04:11.765 CXX test/cpp_headers/ioat_spec.o 00:04:11.765 CXX test/cpp_headers/iscsi_spec.o 00:04:11.765 CXX test/cpp_headers/json.o 00:04:11.765 CXX test/cpp_headers/jsonrpc.o 00:04:11.765 CXX test/cpp_headers/keyring.o 00:04:11.765 CXX test/cpp_headers/keyring_module.o 00:04:12.023 CXX test/cpp_headers/likely.o 00:04:12.023 CXX test/cpp_headers/log.o 00:04:12.023 CXX test/cpp_headers/lvol.o 00:04:12.023 CXX test/cpp_headers/md5.o 00:04:12.023 LINK nvmf 00:04:12.023 CXX test/cpp_headers/memory.o 00:04:12.023 CXX test/cpp_headers/mmio.o 00:04:12.023 LINK bdevio 00:04:12.023 CXX test/cpp_headers/nbd.o 00:04:12.023 CXX test/cpp_headers/net.o 00:04:12.023 CXX test/cpp_headers/notify.o 00:04:12.023 CXX test/cpp_headers/nvme.o 00:04:12.023 CXX test/cpp_headers/nvme_intel.o 00:04:12.023 CXX test/cpp_headers/nvme_ocssd.o 00:04:12.023 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:12.023 CXX test/cpp_headers/nvme_spec.o 00:04:12.413 CXX test/cpp_headers/nvme_zns.o 00:04:12.413 CXX test/cpp_headers/nvmf_cmd.o 00:04:12.413 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:12.413 CXX test/cpp_headers/nvmf.o 00:04:12.413 CXX test/cpp_headers/nvmf_spec.o 00:04:12.413 CXX test/cpp_headers/nvmf_transport.o 00:04:12.413 CXX test/cpp_headers/opal.o 00:04:12.413 CXX test/cpp_headers/opal_spec.o 00:04:12.413 CXX test/cpp_headers/pci_ids.o 00:04:12.413 CXX test/cpp_headers/pipe.o 00:04:12.413 CXX test/cpp_headers/queue.o 00:04:12.413 CXX test/cpp_headers/reduce.o 00:04:12.413 CXX test/cpp_headers/rpc.o 00:04:12.413 CXX test/cpp_headers/scheduler.o 00:04:12.413 CXX test/cpp_headers/scsi.o 00:04:12.413 CXX test/cpp_headers/scsi_spec.o 00:04:12.413 CXX test/cpp_headers/sock.o 00:04:12.413 CXX test/cpp_headers/stdinc.o 00:04:12.413 CXX test/cpp_headers/string.o 00:04:12.413 CXX test/cpp_headers/thread.o 00:04:12.413 CXX test/cpp_headers/trace.o 00:04:12.413 CXX test/cpp_headers/trace_parser.o 00:04:12.413 CXX test/cpp_headers/tree.o 00:04:12.413 CXX test/cpp_headers/ublk.o 00:04:12.672 CXX 
test/cpp_headers/util.o 00:04:12.672 CXX test/cpp_headers/uuid.o 00:04:12.672 CXX test/cpp_headers/version.o 00:04:12.672 CXX test/cpp_headers/vfio_user_pci.o 00:04:12.672 CXX test/cpp_headers/vfio_user_spec.o 00:04:12.672 CXX test/cpp_headers/vhost.o 00:04:12.672 CXX test/cpp_headers/vmd.o 00:04:12.672 LINK cuse 00:04:12.672 CXX test/cpp_headers/xor.o 00:04:12.672 CXX test/cpp_headers/zipf.o 00:04:15.203 LINK esnap 00:04:15.462 00:04:15.462 real 1m3.293s 00:04:15.462 user 5m9.283s 00:04:15.462 sys 0m54.704s 00:04:15.462 09:21:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:15.462 09:21:43 make -- common/autotest_common.sh@10 -- $ set +x 00:04:15.462 ************************************ 00:04:15.462 END TEST make 00:04:15.462 ************************************ 00:04:15.462 09:21:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:15.462 09:21:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:15.462 09:21:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:15.462 09:21:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.462 09:21:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:15.462 09:21:43 -- pm/common@44 -- $ pid=5817 00:04:15.462 09:21:43 -- pm/common@50 -- $ kill -TERM 5817 00:04:15.462 09:21:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.462 09:21:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:15.462 09:21:43 -- pm/common@44 -- $ pid=5819 00:04:15.462 09:21:43 -- pm/common@50 -- $ kill -TERM 5819 00:04:15.462 09:21:43 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:15.462 09:21:43 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:15.720 09:21:43 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.720 09:21:43 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.720 09:21:43 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.720 09:21:43 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.720 09:21:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.720 09:21:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.720 09:21:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.720 09:21:43 -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.720 09:21:43 -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.720 09:21:43 -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.720 09:21:43 -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.720 09:21:43 -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.720 09:21:43 -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.720 09:21:43 -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.720 09:21:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.720 09:21:43 -- scripts/common.sh@344 -- # case "$op" in 00:04:15.720 09:21:43 -- scripts/common.sh@345 -- # : 1 00:04:15.720 09:21:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.720 09:21:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.720 09:21:43 -- scripts/common.sh@365 -- # decimal 1 00:04:15.720 09:21:43 -- scripts/common.sh@353 -- # local d=1 00:04:15.720 09:21:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.720 09:21:43 -- scripts/common.sh@355 -- # echo 1 00:04:15.720 09:21:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.720 09:21:43 -- scripts/common.sh@366 -- # decimal 2 00:04:15.720 09:21:43 -- scripts/common.sh@353 -- # local d=2 00:04:15.720 09:21:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.720 09:21:43 -- scripts/common.sh@355 -- # echo 2 00:04:15.720 09:21:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.720 09:21:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.720 09:21:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.720 09:21:43 -- scripts/common.sh@368 -- # return 0 00:04:15.720 09:21:43 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.720 09:21:43 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.720 --rc genhtml_branch_coverage=1 00:04:15.720 --rc genhtml_function_coverage=1 00:04:15.720 --rc genhtml_legend=1 00:04:15.720 --rc geninfo_all_blocks=1 00:04:15.721 --rc geninfo_unexecuted_blocks=1 00:04:15.721 00:04:15.721 ' 00:04:15.721 09:21:43 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.721 --rc genhtml_branch_coverage=1 00:04:15.721 --rc genhtml_function_coverage=1 00:04:15.721 --rc genhtml_legend=1 00:04:15.721 --rc geninfo_all_blocks=1 00:04:15.721 --rc geninfo_unexecuted_blocks=1 00:04:15.721 00:04:15.721 ' 00:04:15.721 09:21:43 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.721 --rc genhtml_branch_coverage=1 00:04:15.721 --rc genhtml_function_coverage=1 00:04:15.721 --rc genhtml_legend=1 00:04:15.721 --rc geninfo_all_blocks=1 00:04:15.721 --rc geninfo_unexecuted_blocks=1 00:04:15.721 00:04:15.721 ' 00:04:15.721 09:21:43 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.721 --rc genhtml_branch_coverage=1 00:04:15.721 --rc genhtml_function_coverage=1 00:04:15.721 --rc genhtml_legend=1 00:04:15.721 --rc geninfo_all_blocks=1 00:04:15.721 --rc geninfo_unexecuted_blocks=1 00:04:15.721 00:04:15.721 ' 00:04:15.721 09:21:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:15.721 09:21:43 -- nvmf/common.sh@7 -- # uname -s 00:04:15.721 09:21:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:15.721 09:21:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:15.721 09:21:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:15.721 09:21:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:15.721 09:21:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:15.721 09:21:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:15.721 09:21:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:15.721 09:21:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:15.721 09:21:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:15.721 09:21:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:15.721 09:21:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:67e40bf6-6f06-4caa-b4ab-dc6264607e2b 00:04:15.721 
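[editor's note] The xtrace above steps through the lt/cmp_versions helpers from scripts/common.sh while checking that lcov 1.15 predates 2: each version string is split on '.', '-' and ':' and compared field by field. The following is a minimal standalone re-creation of the logic visible in the trace — a sketch reconstructed from the xtrace, not the canonical source:

    #!/usr/bin/env bash
    # Sketch of lt()/cmp_versions() as reconstructed from the xtrace above.
    # Splits each version on '.', '-', ':' and compares field by field.

    decimal() {
        # Non-numeric fields compare as 0 (the trace validates with ^[0-9]+$).
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        local lt=0 gt=0 eq=0 v f1 f2
        case "$op" in
            '<') lt=1 ;; '>') gt=1 ;; '=') eq=1 ;;
        esac

        # Walk up to the longer of the two version strings.
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            f1=$(decimal "${ver1[v]:-0}")
            f2=$(decimal "${ver2[v]:-0}")
            if (( f1 > f2 )); then
                (( gt == 1 )); return
            elif (( f1 < f2 )); then
                (( lt == 1 )); return
            fi
        done
        (( eq == 1 ))
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "1.15 < 2"   # succeeds, matching the traced run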
09:21:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=67e40bf6-6f06-4caa-b4ab-dc6264607e2b 00:04:15.721 09:21:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:15.721 09:21:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:15.721 09:21:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:15.721 09:21:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:15.721 09:21:43 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:15.721 09:21:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:15.721 09:21:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:15.721 09:21:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:15.721 09:21:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:15.721 09:21:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.721 09:21:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.721 09:21:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.721 09:21:43 -- paths/export.sh@5 -- # export PATH 00:04:15.721 09:21:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.721 09:21:43 -- nvmf/common.sh@51 -- # : 0 00:04:15.721 09:21:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:15.721 09:21:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:15.721 09:21:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:15.721 09:21:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:15.721 09:21:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:15.721 09:21:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:15.721 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:15.721 09:21:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:15.721 09:21:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:15.721 09:21:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:15.721 09:21:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:15.721 09:21:43 -- spdk/autotest.sh@32 -- # uname -s 00:04:15.721 09:21:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:15.721 09:21:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:15.721 09:21:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:15.721 09:21:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:15.721 09:21:43 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:15.721 09:21:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:15.721 09:21:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:15.721 09:21:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:15.721 09:21:43 -- spdk/autotest.sh@48 -- # udevadm_pid=68087 00:04:15.721 09:21:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:15.721 09:21:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:15.721 09:21:43 -- pm/common@17 -- # local monitor 00:04:15.721 09:21:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.721 09:21:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.721 09:21:43 -- pm/common@25 -- # sleep 1 00:04:15.721 09:21:43 -- pm/common@21 -- # date +%s 00:04:15.721 09:21:43 -- pm/common@21 -- # date +%s 00:04:15.721 09:21:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732872103 00:04:15.721 09:21:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732872103 00:04:15.721 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732872103_collect-cpu-load.pm.log 00:04:15.721 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732872103_collect-vmstat.pm.log 00:04:16.655 09:21:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:16.655 09:21:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:16.655 09:21:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.655 09:21:44 -- common/autotest_common.sh@10 -- # set +x 00:04:16.655 09:21:44 -- spdk/autotest.sh@59 -- # create_test_list 00:04:16.655 09:21:44 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:16.655 09:21:44 -- common/autotest_common.sh@10 -- # set +x 00:04:16.912 09:21:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:16.912 09:21:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:16.912 09:21:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:16.912 09:21:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:16.912 09:21:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:16.912 09:21:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:16.912 09:21:44 -- common/autotest_common.sh@1457 -- # uname 00:04:16.912 09:21:44 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:16.912 09:21:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:16.912 09:21:44 -- common/autotest_common.sh@1477 -- # uname 00:04:16.912 09:21:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:16.912 09:21:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:16.912 09:21:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:16.912 lcov: LCOV version 1.15 00:04:16.912 09:21:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:31.798 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:31.798 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:46.687 09:22:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:46.687 09:22:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.687 09:22:12 -- common/autotest_common.sh@10 -- # set +x 00:04:46.687 09:22:12 -- spdk/autotest.sh@78 -- # rm -f 00:04:46.687 09:22:12 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.687 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:46.687 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:46.687 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:46.687 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:46.687 09:22:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:46.687 09:22:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:46.687 09:22:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:46.687 09:22:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:46.687 09:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.687 09:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.687 09:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.687 09:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.687 09:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:46.687 09:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:46.687 09:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.687 09:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:46.687 09:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:46.687 09:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:46.687 09:22:13 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.687 09:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.687 09:22:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:46.687 09:22:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:46.687 09:22:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.687 09:22:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:46.687 09:22:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.687 09:22:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.687 09:22:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:46.687 09:22:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:46.687 09:22:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:46.687 No valid GPT data, bailing 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # pt= 00:04:46.687 09:22:13 -- scripts/common.sh@395 -- # return 1 00:04:46.687 09:22:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:46.687 1+0 records in 00:04:46.687 1+0 records out 00:04:46.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0334101 s, 31.4 MB/s 00:04:46.687 09:22:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.687 09:22:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.687 09:22:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:46.687 09:22:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:46.687 09:22:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:46.687 No valid GPT data, bailing 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # pt= 00:04:46.687 09:22:13 -- scripts/common.sh@395 -- # return 1 00:04:46.687 09:22:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:46.687 1+0 records in 00:04:46.687 1+0 records out 00:04:46.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611119 s, 172 MB/s 00:04:46.687 09:22:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.687 09:22:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.687 09:22:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:46.687 09:22:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:46.687 09:22:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:46.687 No valid GPT data, bailing 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # pt= 00:04:46.687 09:22:13 -- scripts/common.sh@395 -- # return 1 00:04:46.687 09:22:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:46.687 1+0 
records in 00:04:46.687 1+0 records out 00:04:46.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547116 s, 192 MB/s 00:04:46.687 09:22:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.687 09:22:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.687 09:22:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:46.687 09:22:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:46.687 09:22:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:46.687 No valid GPT data, bailing 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:46.687 09:22:13 -- scripts/common.sh@394 -- # pt= 00:04:46.687 09:22:13 -- scripts/common.sh@395 -- # return 1 00:04:46.687 09:22:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:46.687 1+0 records in 00:04:46.687 1+0 records out 00:04:46.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00624187 s, 168 MB/s 00:04:46.687 09:22:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.687 09:22:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.688 09:22:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:46.688 09:22:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:46.688 09:22:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:46.688 No valid GPT data, bailing 00:04:46.688 09:22:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:46.688 09:22:13 -- scripts/common.sh@394 -- # pt= 00:04:46.688 09:22:13 -- scripts/common.sh@395 -- # return 1 00:04:46.688 09:22:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:46.688 1+0 records in 00:04:46.688 1+0 records out 00:04:46.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551648 s, 190 MB/s 00:04:46.688 09:22:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.688 09:22:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.688 09:22:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:46.688 09:22:13 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:46.688 09:22:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:46.688 No valid GPT data, bailing 00:04:46.688 09:22:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:46.688 09:22:13 -- scripts/common.sh@394 -- # pt= 00:04:46.688 09:22:13 -- scripts/common.sh@395 -- # return 1 00:04:46.688 09:22:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:46.688 1+0 records in 00:04:46.688 1+0 records out 00:04:46.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498507 s, 210 MB/s 00:04:46.688 09:22:13 -- spdk/autotest.sh@105 -- # sync 00:04:46.688 09:22:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:46.688 09:22:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:46.688 09:22:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:48.604 09:22:16 -- spdk/autotest.sh@111 -- # uname -s 00:04:48.604 09:22:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:48.604 09:22:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:48.604 09:22:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:49.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.440 
Hugepages 00:04:49.440 node hugesize free / total 00:04:49.440 node0 1048576kB 0 / 0 00:04:49.440 node0 2048kB 0 / 0 00:04:49.440 00:04:49.440 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.440 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:49.702 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:49.702 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:49.702 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:49.702 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:49.702 09:22:17 -- spdk/autotest.sh@117 -- # uname -s 00:04:49.702 09:22:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:49.702 09:22:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:49.702 09:22:17 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.843 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.843 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.843 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.843 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.843 09:22:18 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:52.239 09:22:19 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:52.239 09:22:19 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:52.239 09:22:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:52.239 09:22:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:52.239 09:22:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:52.239 09:22:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:52.239 09:22:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.239 09:22:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:52.239 09:22:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:52.239 09:22:19 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:52.239 09:22:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:52.239 09:22:19 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.499 Waiting for block devices as requested 00:04:52.499 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.499 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.760 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.760 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:58.053 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:58.053 09:22:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:58.053 09:22:25 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:58.053 09:22:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:58.053 09:22:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1543 -- # continue 00:04:58.053 09:22:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:58.053 09:22:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1543 -- # continue 00:04:58.053 09:22:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:58.053 09:22:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1543 -- # continue 00:04:58.053 09:22:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:58.053 09:22:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:58.053 09:22:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:58.053 09:22:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:58.053 09:22:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:58.053 09:22:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:58.053 09:22:25 -- common/autotest_common.sh@1543 -- # continue 00:04:58.053 09:22:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:58.053 09:22:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.053 09:22:25 -- common/autotest_common.sh@10 -- # set +x 00:04:58.053 09:22:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:58.053 09:22:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.053 09:22:25 -- common/autotest_common.sh@10 -- # set +x 00:04:58.053 09:22:25 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.213 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.213 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.213 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.213 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.213 09:22:26 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:59.213 09:22:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.213 09:22:26 -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 09:22:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:59.475 09:22:26 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:59.475 09:22:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:59.475 09:22:26 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:59.475 09:22:26 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:59.475 09:22:26 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:59.475 09:22:26 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:59.475 09:22:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:59.475 09:22:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:59.475 09:22:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:59.475 09:22:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.475 09:22:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:59.475 09:22:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:59.475 09:22:27 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:59.475 09:22:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:59.475 09:22:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:59.475 09:22:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:59.475 09:22:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:59.475 09:22:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:59.475 09:22:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:59.475 09:22:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
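For anyone reading the trace rather than the scripts, the pre_cleanup controller checks above reduce to a short shell sequence: map each PCI address to its nvme character device through sysfs, read OACS from nvme id-ctrl to see whether Namespace Management is supported, and look at unvmcap to decide whether a namespace revert is needed. A condensed sketch, assuming nvme-cli and the sysfs layout seen in this run (the BDF and variable names are illustrative, not taken verbatim from the harness):

    # Condensed sketch of the pre_cleanup controller checks traced above.
    # The BDF is an example; the real loop walks every address reported by
    # scripts/gen_nvme.sh.
    bdf=0000:00:10.0

    # Resolve the controller character device for this PCI address via sysfs.
    ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$ctrlr_path")

    # OACS bit 3 (0x8) advertises Namespace Management support.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    ns_manage=$(( oacs & 0x8 ))

    # Unallocated NVM capacity; 0 (as in this run) means nothing to revert.
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    if (( ns_manage != 0 )) && (( unvmcap == 0 )); then
        echo "$ctrlr: namespace layout already clean, skipping revert"
    fi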
00:04:59.475 09:22:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:59.475 09:22:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:59.475 09:22:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:59.475 09:22:27 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:59.475 09:22:27 -- common/autotest_common.sh@1572 -- # return 0 00:04:59.475 09:22:27 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:59.475 09:22:27 -- common/autotest_common.sh@1580 -- # return 0 00:04:59.475 09:22:27 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:59.475 09:22:27 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:59.475 09:22:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:59.475 09:22:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:59.475 09:22:27 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:59.475 09:22:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.475 09:22:27 -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 09:22:27 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:59.475 09:22:27 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:59.475 09:22:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.475 09:22:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.475 09:22:27 -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 ************************************ 00:04:59.475 START TEST env 00:04:59.475 ************************************ 00:04:59.475 09:22:27 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:59.475 * Looking for test storage... 00:04:59.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:59.475 09:22:27 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.475 09:22:27 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.475 09:22:27 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.736 09:22:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.736 09:22:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.736 09:22:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.736 09:22:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.736 09:22:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.736 09:22:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.736 09:22:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.736 09:22:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.736 09:22:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.736 09:22:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.736 09:22:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.736 09:22:27 env -- scripts/common.sh@344 -- # case "$op" in 00:04:59.736 09:22:27 env -- scripts/common.sh@345 -- # : 1 00:04:59.736 09:22:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.736 09:22:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.736 09:22:27 env -- scripts/common.sh@365 -- # decimal 1 00:04:59.736 09:22:27 env -- scripts/common.sh@353 -- # local d=1 00:04:59.736 09:22:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.736 09:22:27 env -- scripts/common.sh@355 -- # echo 1 00:04:59.736 09:22:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.736 09:22:27 env -- scripts/common.sh@366 -- # decimal 2 00:04:59.736 09:22:27 env -- scripts/common.sh@353 -- # local d=2 00:04:59.736 09:22:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.736 09:22:27 env -- scripts/common.sh@355 -- # echo 2 00:04:59.736 09:22:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.736 09:22:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.736 09:22:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.736 09:22:27 env -- scripts/common.sh@368 -- # return 0 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.736 --rc genhtml_branch_coverage=1 00:04:59.736 --rc genhtml_function_coverage=1 00:04:59.736 --rc genhtml_legend=1 00:04:59.736 --rc geninfo_all_blocks=1 00:04:59.736 --rc geninfo_unexecuted_blocks=1 00:04:59.736 00:04:59.736 ' 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.736 --rc genhtml_branch_coverage=1 00:04:59.736 --rc genhtml_function_coverage=1 00:04:59.736 --rc genhtml_legend=1 00:04:59.736 --rc geninfo_all_blocks=1 00:04:59.736 --rc geninfo_unexecuted_blocks=1 00:04:59.736 00:04:59.736 ' 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.736 --rc genhtml_branch_coverage=1 00:04:59.736 --rc genhtml_function_coverage=1 00:04:59.736 --rc genhtml_legend=1 00:04:59.736 --rc geninfo_all_blocks=1 00:04:59.736 --rc geninfo_unexecuted_blocks=1 00:04:59.736 00:04:59.736 ' 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.736 --rc genhtml_branch_coverage=1 00:04:59.736 --rc genhtml_function_coverage=1 00:04:59.736 --rc genhtml_legend=1 00:04:59.736 --rc geninfo_all_blocks=1 00:04:59.736 --rc geninfo_unexecuted_blocks=1 00:04:59.736 00:04:59.736 ' 00:04:59.736 09:22:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.736 09:22:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.736 09:22:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.736 ************************************ 00:04:59.736 START TEST env_memory 00:04:59.736 ************************************ 00:04:59.736 09:22:27 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:59.736 00:04:59.736 00:04:59.736 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.736 http://cunit.sourceforge.net/ 00:04:59.736 00:04:59.736 00:04:59.736 Suite: memory 00:04:59.736 Test: alloc and free memory map ...[2024-11-29 09:22:27.289503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:59.736 passed 00:04:59.736 Test: mem map translation ...[2024-11-29 09:22:27.329233] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:59.736 [2024-11-29 09:22:27.329477] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:59.736 [2024-11-29 09:22:27.329627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:59.736 [2024-11-29 09:22:27.329761] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.736 passed 00:04:59.736 Test: mem map registration ...[2024-11-29 09:22:27.398758] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:59.736 [2024-11-29 09:22:27.399009] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:59.736 passed 00:04:59.997 Test: mem map adjacent registrations ...passed 00:04:59.997 00:04:59.997 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.998 suites 1 1 n/a 0 0 00:04:59.998 tests 4 4 4 0 0 00:04:59.998 asserts 152 152 152 0 n/a 00:04:59.998 00:04:59.998 Elapsed time = 0.234 seconds 00:04:59.998 00:04:59.998 real 0m0.273s 00:04:59.998 user 0m0.247s 00:04:59.998 sys 0m0.017s 00:04:59.998 09:22:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.998 ************************************ 00:04:59.998 END TEST env_memory 00:04:59.998 ************************************ 00:04:59.998 09:22:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.998 09:22:27 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.998 09:22:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.998 09:22:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.998 09:22:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.998 ************************************ 00:04:59.998 START TEST env_vtophys 00:04:59.998 ************************************ 00:04:59.998 09:22:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.998 EAL: lib.eal log level changed from notice to debug 00:04:59.998 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 1 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 2 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 3 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 4 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 5 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 6 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 7 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 8 as core 0 on socket 0 00:04:59.998 EAL: Detected lcore 9 as core 0 on socket 0 00:04:59.998 EAL: Maximum logical cores by configuration: 128 00:04:59.998 EAL: Detected CPU lcores: 10 00:04:59.998 EAL: Detected NUMA nodes: 1 00:04:59.998 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:04:59.998 EAL: Detected shared linkage of DPDK 00:04:59.998 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:04:59.998 EAL: Registered [vdev] bus. 00:04:59.998 EAL: bus.vdev log level changed from disabled to notice 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:04:59.998 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:59.998 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:04:59.998 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:04:59.998 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.998 EAL: No shared files mode enabled, IPC is disabled 00:04:59.998 EAL: Selected IOVA mode 'PA' 00:04:59.998 EAL: Probing VFIO support... 00:04:59.998 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.998 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:59.998 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.998 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.998 EAL: Setting up physically contiguous memory... 
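The EAL output above also explains why this run selects IOVA mode 'PA': the vfio and vfio_pci kernel modules are not loaded in the VM, so VFIO support is skipped. The probe can be reproduced by hand with a couple of sysfs checks (module names are the ones EAL reports in this log; the modprobe hint is optional and needs root):

    # Mirror the EAL probe: is VFIO available on this host?
    for mod in vfio vfio_pci; do
        if [[ -d /sys/module/$mod ]]; then
            echo "$mod: loaded"
        else
            echo "$mod: not loaded (VFIO support will be skipped)"
        fi
    done
    # To exercise the VFIO path on a machine with an IOMMU (requires root):
    # sudo modprobe vfio-pci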
00:04:59.998 EAL: Setting maximum number of open files to 524288 00:04:59.998 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.998 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.998 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.998 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.998 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.998 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.998 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.998 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.998 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.998 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.998 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.998 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.998 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.998 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.998 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.998 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.998 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.998 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.998 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.998 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.998 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.998 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.998 EAL: Hugepages will be freed exactly as allocated. 00:04:59.998 EAL: No shared files mode enabled, IPC is disabled 00:04:59.998 EAL: No shared files mode enabled, IPC is disabled 00:05:00.260 EAL: TSC frequency is ~2600000 KHz 00:05:00.260 EAL: Main lcore 0 is ready (tid=7f0fc94eca40;cpuset=[0]) 00:05:00.260 EAL: Trying to obtain current memory policy. 00:05:00.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.260 EAL: Restoring previous memory policy: 0 00:05:00.260 EAL: request: mp_malloc_sync 00:05:00.260 EAL: No shared files mode enabled, IPC is disabled 00:05:00.260 EAL: Heap on socket 0 was expanded by 2MB 00:05:00.260 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:05:00.260 EAL: No shared files mode enabled, IPC is disabled 00:05:00.260 EAL: Mem event callback 'spdk:(nil)' registered 00:05:00.260 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:00.260 00:05:00.260 00:05:00.260 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.260 http://cunit.sourceforge.net/ 00:05:00.260 00:05:00.260 00:05:00.260 Suite: components_suite 00:05:00.522 Test: vtophys_malloc_test ...passed 00:05:00.522 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:00.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.522 EAL: Restoring previous memory policy: 4 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was expanded by 4MB 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was shrunk by 4MB 00:05:00.522 EAL: Trying to obtain current memory policy. 00:05:00.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.522 EAL: Restoring previous memory policy: 4 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was expanded by 6MB 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was shrunk by 6MB 00:05:00.522 EAL: Trying to obtain current memory policy. 00:05:00.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.522 EAL: Restoring previous memory policy: 4 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was expanded by 10MB 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was shrunk by 10MB 00:05:00.522 EAL: Trying to obtain current memory policy. 00:05:00.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.522 EAL: Restoring previous memory policy: 4 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was expanded by 18MB 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was shrunk by 18MB 00:05:00.522 EAL: Trying to obtain current memory policy. 00:05:00.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.522 EAL: Restoring previous memory policy: 4 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was expanded by 34MB 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was shrunk by 34MB 00:05:00.522 EAL: Trying to obtain current memory policy. 
00:05:00.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.522 EAL: Restoring previous memory policy: 4 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was expanded by 66MB 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was shrunk by 66MB 00:05:00.522 EAL: Trying to obtain current memory policy. 00:05:00.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.522 EAL: Restoring previous memory policy: 4 00:05:00.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.522 EAL: request: mp_malloc_sync 00:05:00.522 EAL: No shared files mode enabled, IPC is disabled 00:05:00.522 EAL: Heap on socket 0 was expanded by 130MB 00:05:00.783 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.783 EAL: request: mp_malloc_sync 00:05:00.783 EAL: No shared files mode enabled, IPC is disabled 00:05:00.783 EAL: Heap on socket 0 was shrunk by 130MB 00:05:00.783 EAL: Trying to obtain current memory policy. 00:05:00.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.783 EAL: Restoring previous memory policy: 4 00:05:00.783 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.783 EAL: request: mp_malloc_sync 00:05:00.783 EAL: No shared files mode enabled, IPC is disabled 00:05:00.783 EAL: Heap on socket 0 was expanded by 258MB 00:05:00.783 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.783 EAL: request: mp_malloc_sync 00:05:00.783 EAL: No shared files mode enabled, IPC is disabled 00:05:00.783 EAL: Heap on socket 0 was shrunk by 258MB 00:05:00.783 EAL: Trying to obtain current memory policy. 00:05:00.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.783 EAL: Restoring previous memory policy: 4 00:05:00.783 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.783 EAL: request: mp_malloc_sync 00:05:00.783 EAL: No shared files mode enabled, IPC is disabled 00:05:00.783 EAL: Heap on socket 0 was expanded by 514MB 00:05:00.783 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.044 EAL: request: mp_malloc_sync 00:05:01.044 EAL: No shared files mode enabled, IPC is disabled 00:05:01.044 EAL: Heap on socket 0 was shrunk by 514MB 00:05:01.044 EAL: Trying to obtain current memory policy. 
00:05:01.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.044 EAL: Restoring previous memory policy: 4 00:05:01.044 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.044 EAL: request: mp_malloc_sync 00:05:01.044 EAL: No shared files mode enabled, IPC is disabled 00:05:01.044 EAL: Heap on socket 0 was expanded by 1026MB 00:05:01.306 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.306 EAL: request: mp_malloc_sync 00:05:01.306 EAL: No shared files mode enabled, IPC is disabled 00:05:01.306 passed 00:05:01.306 00:05:01.306 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.306 suites 1 1 n/a 0 0 00:05:01.306 tests 2 2 2 0 0 00:05:01.306 asserts 5358 5358 5358 0 n/a 00:05:01.306 00:05:01.306 Elapsed time = 1.122 seconds 00:05:01.306 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:01.306 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.306 EAL: request: mp_malloc_sync 00:05:01.306 EAL: No shared files mode enabled, IPC is disabled 00:05:01.306 EAL: Heap on socket 0 was shrunk by 2MB 00:05:01.306 EAL: No shared files mode enabled, IPC is disabled 00:05:01.306 EAL: No shared files mode enabled, IPC is disabled 00:05:01.306 EAL: No shared files mode enabled, IPC is disabled 00:05:01.306 ************************************ 00:05:01.306 END TEST env_vtophys 00:05:01.306 ************************************ 00:05:01.306 00:05:01.306 real 0m1.385s 00:05:01.306 user 0m0.533s 00:05:01.306 sys 0m0.708s 00:05:01.306 09:22:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.306 09:22:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 09:22:29 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:01.306 09:22:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.306 09:22:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.306 09:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 ************************************ 00:05:01.306 START TEST env_pci 00:05:01.306 ************************************ 00:05:01.306 09:22:29 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:01.567 00:05:01.567 00:05:01.567 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.567 http://cunit.sourceforge.net/ 00:05:01.567 00:05:01.567 00:05:01.567 Suite: pci 00:05:01.567 Test: pci_hook ...[2024-11-29 09:22:29.051803] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70812 has claimed it 00:05:01.567 EAL: Cannot find device (10000:00:01.0) 00:05:01.567 EAL: Failed to attach device on primary process 00:05:01.567 passed 00:05:01.567 00:05:01.567 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.567 suites 1 1 n/a 0 0 00:05:01.567 tests 1 1 1 0 0 00:05:01.567 asserts 25 25 25 0 n/a 00:05:01.567 00:05:01.567 Elapsed time = 0.007 seconds 00:05:01.567 00:05:01.567 real 0m0.079s 00:05:01.567 user 0m0.035s 00:05:01.567 sys 0m0.042s 00:05:01.567 09:22:29 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.567 09:22:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:01.567 ************************************ 00:05:01.567 END TEST env_pci 00:05:01.567 ************************************ 00:05:01.567 09:22:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:01.567 09:22:29 env -- env/env.sh@15 -- # uname 00:05:01.567 09:22:29 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:01.567 09:22:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:01.567 09:22:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:01.567 09:22:29 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:01.567 09:22:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.567 09:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.567 ************************************ 00:05:01.567 START TEST env_dpdk_post_init 00:05:01.567 ************************************ 00:05:01.567 09:22:29 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:01.567 EAL: Detected CPU lcores: 10 00:05:01.567 EAL: Detected NUMA nodes: 1 00:05:01.567 EAL: Detected shared linkage of DPDK 00:05:01.567 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.567 EAL: Selected IOVA mode 'PA' 00:05:01.829 Starting DPDK initialization... 00:05:01.829 Starting SPDK post initialization... 00:05:01.829 SPDK NVMe probe 00:05:01.829 Attaching to 0000:00:10.0 00:05:01.829 Attaching to 0000:00:11.0 00:05:01.829 Attaching to 0000:00:12.0 00:05:01.829 Attaching to 0000:00:13.0 00:05:01.829 Attached to 0000:00:13.0 00:05:01.829 Attached to 0000:00:10.0 00:05:01.829 Attached to 0000:00:11.0 00:05:01.829 Attached to 0000:00:12.0 00:05:01.829 Cleaning up... 00:05:01.829 00:05:01.829 real 0m0.246s 00:05:01.829 user 0m0.075s 00:05:01.829 sys 0m0.072s 00:05:01.829 ************************************ 00:05:01.829 END TEST env_dpdk_post_init 00:05:01.829 ************************************ 00:05:01.829 09:22:29 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.829 09:22:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.829 09:22:29 env -- env/env.sh@26 -- # uname 00:05:01.829 09:22:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.829 09:22:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.829 09:22:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.829 09:22:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.829 09:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.829 ************************************ 00:05:01.829 START TEST env_mem_callbacks 00:05:01.829 ************************************ 00:05:01.829 09:22:29 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.829 EAL: Detected CPU lcores: 10 00:05:01.829 EAL: Detected NUMA nodes: 1 00:05:01.829 EAL: Detected shared linkage of DPDK 00:05:01.829 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.829 EAL: Selected IOVA mode 'PA' 00:05:02.091 00:05:02.091 00:05:02.091 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.091 http://cunit.sourceforge.net/ 00:05:02.091 00:05:02.091 00:05:02.091 Suite: memory 00:05:02.091 Test: test ... 
00:05:02.091 register 0x200000200000 2097152 00:05:02.091 malloc 3145728 00:05:02.091 register 0x200000400000 4194304 00:05:02.091 buf 0x200000500000 len 3145728 PASSED 00:05:02.091 malloc 64 00:05:02.091 buf 0x2000004fff40 len 64 PASSED 00:05:02.091 malloc 4194304 00:05:02.091 register 0x200000800000 6291456 00:05:02.091 buf 0x200000a00000 len 4194304 PASSED 00:05:02.091 free 0x200000500000 3145728 00:05:02.091 free 0x2000004fff40 64 00:05:02.091 unregister 0x200000400000 4194304 PASSED 00:05:02.091 free 0x200000a00000 4194304 00:05:02.091 unregister 0x200000800000 6291456 PASSED 00:05:02.091 malloc 8388608 00:05:02.091 register 0x200000400000 10485760 00:05:02.091 buf 0x200000600000 len 8388608 PASSED 00:05:02.091 free 0x200000600000 8388608 00:05:02.091 unregister 0x200000400000 10485760 PASSED 00:05:02.091 passed 00:05:02.091 00:05:02.091 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.091 suites 1 1 n/a 0 0 00:05:02.091 tests 1 1 1 0 0 00:05:02.091 asserts 15 15 15 0 n/a 00:05:02.091 00:05:02.091 Elapsed time = 0.012 seconds 00:05:02.091 00:05:02.091 real 0m0.190s 00:05:02.091 user 0m0.027s 00:05:02.091 sys 0m0.060s 00:05:02.091 09:22:29 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.091 ************************************ 00:05:02.091 END TEST env_mem_callbacks 00:05:02.091 ************************************ 00:05:02.091 09:22:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:02.091 00:05:02.091 real 0m2.676s 00:05:02.091 user 0m1.083s 00:05:02.091 sys 0m1.118s 00:05:02.091 09:22:29 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.091 09:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.091 ************************************ 00:05:02.091 END TEST env 00:05:02.091 ************************************ 00:05:02.091 09:22:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:02.091 09:22:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.091 09:22:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.091 09:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:02.091 ************************************ 00:05:02.091 START TEST rpc 00:05:02.091 ************************************ 00:05:02.091 09:22:29 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:02.353 * Looking for test storage... 
00:05:02.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.353 09:22:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.353 09:22:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.353 09:22:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.353 09:22:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.353 09:22:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.353 09:22:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.353 09:22:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.353 09:22:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:02.353 09:22:29 rpc -- scripts/common.sh@345 -- # : 1 00:05:02.353 09:22:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.353 09:22:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.353 09:22:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:02.353 09:22:29 rpc -- scripts/common.sh@353 -- # local d=1 00:05:02.353 09:22:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.353 09:22:29 rpc -- scripts/common.sh@355 -- # echo 1 00:05:02.353 09:22:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.353 09:22:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@353 -- # local d=2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.353 09:22:29 rpc -- scripts/common.sh@355 -- # echo 2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.353 09:22:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.353 09:22:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.353 09:22:29 rpc -- scripts/common.sh@368 -- # return 0 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.353 --rc genhtml_branch_coverage=1 00:05:02.353 --rc genhtml_function_coverage=1 00:05:02.353 --rc genhtml_legend=1 00:05:02.353 --rc geninfo_all_blocks=1 00:05:02.353 --rc geninfo_unexecuted_blocks=1 00:05:02.353 00:05:02.353 ' 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.353 --rc genhtml_branch_coverage=1 00:05:02.353 --rc genhtml_function_coverage=1 00:05:02.353 --rc genhtml_legend=1 00:05:02.353 --rc geninfo_all_blocks=1 00:05:02.353 --rc geninfo_unexecuted_blocks=1 00:05:02.353 00:05:02.353 ' 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.353 --rc genhtml_branch_coverage=1 00:05:02.353 --rc genhtml_function_coverage=1 00:05:02.353 --rc 
genhtml_legend=1 00:05:02.353 --rc geninfo_all_blocks=1 00:05:02.353 --rc geninfo_unexecuted_blocks=1 00:05:02.353 00:05:02.353 ' 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.353 --rc genhtml_branch_coverage=1 00:05:02.353 --rc genhtml_function_coverage=1 00:05:02.353 --rc genhtml_legend=1 00:05:02.353 --rc geninfo_all_blocks=1 00:05:02.353 --rc geninfo_unexecuted_blocks=1 00:05:02.353 00:05:02.353 ' 00:05:02.353 09:22:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70939 00:05:02.353 09:22:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.353 09:22:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70939 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 70939 ']' 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.353 09:22:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.353 09:22:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:02.353 [2024-11-29 09:22:30.053167] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:02.353 [2024-11-29 09:22:30.053325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70939 ] 00:05:02.615 [2024-11-29 09:22:30.189444] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:02.615 [2024-11-29 09:22:30.219030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.615 [2024-11-29 09:22:30.250133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.615 [2024-11-29 09:22:30.250208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70939' to capture a snapshot of events at runtime. 00:05:02.615 [2024-11-29 09:22:30.250219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.615 [2024-11-29 09:22:30.250234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.615 [2024-11-29 09:22:30.250246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70939 for offline analysis/debug. 
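Before any RPC runs, rpc.sh launches a bare spdk_tgt with the bdev tracepoint group enabled and blocks in waitforlisten until the application's UNIX-domain RPC socket answers. Done by hand, the same handshake looks roughly like the following (the polling loop is a simplified stand-in for the repo's waitforlisten helper; the binary path, the -e bdev flag and the socket path are taken from the log, while the retry count is arbitrary):

    # Start the target with bdev tracepoints enabled, as in the trace above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!

    # Simplified stand-in for waitforlisten: poll the RPC socket until it answers.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

    # The target is now ready for the bdev RPCs exercised below.
    "$rpc" -s "$sock" bdev_get_bdevs     # prints [] on a fresh target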
00:05:02.615 [2024-11-29 09:22:30.250701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.187 09:22:30 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.187 09:22:30 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.187 09:22:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.187 09:22:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.187 09:22:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.187 09:22:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.187 09:22:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.187 09:22:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.187 09:22:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 ************************************ 00:05:03.449 START TEST rpc_integrity 00:05:03.449 ************************************ 00:05:03.449 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:03.449 09:22:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.450 09:22:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.450 09:22:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.450 09:22:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.450 09:22:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.450 09:22:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:03.450 09:22:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.450 09:22:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.450 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.450 { 00:05:03.450 "name": "Malloc0", 00:05:03.450 "aliases": [ 00:05:03.450 "df14e1b9-c290-4dd6-a817-0d5545bfdecd" 00:05:03.450 ], 00:05:03.450 "product_name": "Malloc disk", 00:05:03.450 "block_size": 512, 00:05:03.450 "num_blocks": 16384, 00:05:03.450 "uuid": "df14e1b9-c290-4dd6-a817-0d5545bfdecd", 00:05:03.450 "assigned_rate_limits": { 00:05:03.450 "rw_ios_per_sec": 0, 00:05:03.450 "rw_mbytes_per_sec": 0, 00:05:03.450 "r_mbytes_per_sec": 0, 00:05:03.450 "w_mbytes_per_sec": 0 00:05:03.450 }, 00:05:03.450 "claimed": false, 00:05:03.450 "zoned": false, 00:05:03.450 "supported_io_types": { 00:05:03.450 "read": true, 00:05:03.450 "write": true, 00:05:03.450 "unmap": true, 00:05:03.450 "flush": true, 
00:05:03.450 "reset": true, 00:05:03.450 "nvme_admin": false, 00:05:03.450 "nvme_io": false, 00:05:03.450 "nvme_io_md": false, 00:05:03.450 "write_zeroes": true, 00:05:03.450 "zcopy": true, 00:05:03.450 "get_zone_info": false, 00:05:03.450 "zone_management": false, 00:05:03.450 "zone_append": false, 00:05:03.450 "compare": false, 00:05:03.450 "compare_and_write": false, 00:05:03.450 "abort": true, 00:05:03.450 "seek_hole": false, 00:05:03.450 "seek_data": false, 00:05:03.450 "copy": true, 00:05:03.450 "nvme_iov_md": false 00:05:03.450 }, 00:05:03.450 "memory_domains": [ 00:05:03.450 { 00:05:03.450 "dma_device_id": "system", 00:05:03.450 "dma_device_type": 1 00:05:03.450 }, 00:05:03.450 { 00:05:03.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.450 "dma_device_type": 2 00:05:03.450 } 00:05:03.450 ], 00:05:03.450 "driver_specific": {} 00:05:03.450 } 00:05:03.450 ]' 00:05:03.450 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.450 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.450 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.450 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.450 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.450 [2024-11-29 09:22:31.038640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.450 [2024-11-29 09:22:31.038727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.450 [2024-11-29 09:22:31.038769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:03.450 [2024-11-29 09:22:31.038784] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.450 [2024-11-29 09:22:31.041391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.450 [2024-11-29 09:22:31.041453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.450 Passthru0 00:05:03.450 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.450 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.450 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.450 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.450 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.450 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.450 { 00:05:03.450 "name": "Malloc0", 00:05:03.450 "aliases": [ 00:05:03.450 "df14e1b9-c290-4dd6-a817-0d5545bfdecd" 00:05:03.450 ], 00:05:03.450 "product_name": "Malloc disk", 00:05:03.450 "block_size": 512, 00:05:03.450 "num_blocks": 16384, 00:05:03.450 "uuid": "df14e1b9-c290-4dd6-a817-0d5545bfdecd", 00:05:03.450 "assigned_rate_limits": { 00:05:03.450 "rw_ios_per_sec": 0, 00:05:03.450 "rw_mbytes_per_sec": 0, 00:05:03.450 "r_mbytes_per_sec": 0, 00:05:03.450 "w_mbytes_per_sec": 0 00:05:03.450 }, 00:05:03.450 "claimed": true, 00:05:03.450 "claim_type": "exclusive_write", 00:05:03.450 "zoned": false, 00:05:03.450 "supported_io_types": { 00:05:03.450 "read": true, 00:05:03.450 "write": true, 00:05:03.450 "unmap": true, 00:05:03.450 "flush": true, 00:05:03.450 "reset": true, 00:05:03.450 "nvme_admin": false, 00:05:03.450 "nvme_io": false, 00:05:03.450 "nvme_io_md": false, 00:05:03.450 "write_zeroes": true, 00:05:03.450 "zcopy": true, 
00:05:03.450 "get_zone_info": false, 00:05:03.450 "zone_management": false, 00:05:03.450 "zone_append": false, 00:05:03.450 "compare": false, 00:05:03.450 "compare_and_write": false, 00:05:03.450 "abort": true, 00:05:03.450 "seek_hole": false, 00:05:03.450 "seek_data": false, 00:05:03.450 "copy": true, 00:05:03.450 "nvme_iov_md": false 00:05:03.450 }, 00:05:03.450 "memory_domains": [ 00:05:03.450 { 00:05:03.450 "dma_device_id": "system", 00:05:03.450 "dma_device_type": 1 00:05:03.450 }, 00:05:03.450 { 00:05:03.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.450 "dma_device_type": 2 00:05:03.450 } 00:05:03.450 ], 00:05:03.450 "driver_specific": {} 00:05:03.450 }, 00:05:03.450 { 00:05:03.450 "name": "Passthru0", 00:05:03.450 "aliases": [ 00:05:03.450 "5358fdcf-21c0-58e0-9b62-17eab945e89d" 00:05:03.450 ], 00:05:03.450 "product_name": "passthru", 00:05:03.450 "block_size": 512, 00:05:03.450 "num_blocks": 16384, 00:05:03.450 "uuid": "5358fdcf-21c0-58e0-9b62-17eab945e89d", 00:05:03.450 "assigned_rate_limits": { 00:05:03.450 "rw_ios_per_sec": 0, 00:05:03.450 "rw_mbytes_per_sec": 0, 00:05:03.450 "r_mbytes_per_sec": 0, 00:05:03.450 "w_mbytes_per_sec": 0 00:05:03.450 }, 00:05:03.450 "claimed": false, 00:05:03.450 "zoned": false, 00:05:03.450 "supported_io_types": { 00:05:03.450 "read": true, 00:05:03.450 "write": true, 00:05:03.450 "unmap": true, 00:05:03.450 "flush": true, 00:05:03.450 "reset": true, 00:05:03.450 "nvme_admin": false, 00:05:03.450 "nvme_io": false, 00:05:03.450 "nvme_io_md": false, 00:05:03.450 "write_zeroes": true, 00:05:03.450 "zcopy": true, 00:05:03.450 "get_zone_info": false, 00:05:03.450 "zone_management": false, 00:05:03.450 "zone_append": false, 00:05:03.450 "compare": false, 00:05:03.450 "compare_and_write": false, 00:05:03.450 "abort": true, 00:05:03.450 "seek_hole": false, 00:05:03.451 "seek_data": false, 00:05:03.451 "copy": true, 00:05:03.451 "nvme_iov_md": false 00:05:03.451 }, 00:05:03.451 "memory_domains": [ 00:05:03.451 { 00:05:03.451 "dma_device_id": "system", 00:05:03.451 "dma_device_type": 1 00:05:03.451 }, 00:05:03.451 { 00:05:03.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.451 "dma_device_type": 2 00:05:03.451 } 00:05:03.451 ], 00:05:03.451 "driver_specific": { 00:05:03.451 "passthru": { 00:05:03.451 "name": "Passthru0", 00:05:03.451 "base_bdev_name": "Malloc0" 00:05:03.451 } 00:05:03.451 } 00:05:03.451 } 00:05:03.451 ]' 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.451 09:22:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.451 00:05:03.451 real 0m0.232s 00:05:03.451 user 0m0.134s 00:05:03.451 sys 0m0.035s 00:05:03.451 ************************************ 00:05:03.451 END TEST rpc_integrity 00:05:03.451 ************************************ 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.451 09:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.713 09:22:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.713 09:22:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.713 09:22:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.713 09:22:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.713 ************************************ 00:05:03.713 START TEST rpc_plugins 00:05:03.713 ************************************ 00:05:03.713 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:03.713 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.714 { 00:05:03.714 "name": "Malloc1", 00:05:03.714 "aliases": [ 00:05:03.714 "f6825f80-23fa-4d52-b5e9-adc100153387" 00:05:03.714 ], 00:05:03.714 "product_name": "Malloc disk", 00:05:03.714 "block_size": 4096, 00:05:03.714 "num_blocks": 256, 00:05:03.714 "uuid": "f6825f80-23fa-4d52-b5e9-adc100153387", 00:05:03.714 "assigned_rate_limits": { 00:05:03.714 "rw_ios_per_sec": 0, 00:05:03.714 "rw_mbytes_per_sec": 0, 00:05:03.714 "r_mbytes_per_sec": 0, 00:05:03.714 "w_mbytes_per_sec": 0 00:05:03.714 }, 00:05:03.714 "claimed": false, 00:05:03.714 "zoned": false, 00:05:03.714 "supported_io_types": { 00:05:03.714 "read": true, 00:05:03.714 "write": true, 00:05:03.714 "unmap": true, 00:05:03.714 "flush": true, 00:05:03.714 "reset": true, 00:05:03.714 "nvme_admin": false, 00:05:03.714 "nvme_io": false, 00:05:03.714 "nvme_io_md": false, 00:05:03.714 "write_zeroes": true, 00:05:03.714 "zcopy": true, 00:05:03.714 "get_zone_info": false, 00:05:03.714 "zone_management": false, 00:05:03.714 "zone_append": false, 00:05:03.714 "compare": false, 00:05:03.714 "compare_and_write": false, 00:05:03.714 "abort": true, 00:05:03.714 "seek_hole": false, 00:05:03.714 "seek_data": false, 00:05:03.714 "copy": true, 00:05:03.714 "nvme_iov_md": false 00:05:03.714 }, 00:05:03.714 "memory_domains": [ 00:05:03.714 { 00:05:03.714 "dma_device_id": "system", 00:05:03.714 "dma_device_type": 1 00:05:03.714 }, 00:05:03.714 { 00:05:03.714 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:03.714 "dma_device_type": 2 00:05:03.714 } 00:05:03.714 ], 00:05:03.714 "driver_specific": {} 00:05:03.714 } 00:05:03.714 ]' 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.714 09:22:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.714 00:05:03.714 real 0m0.113s 00:05:03.714 user 0m0.060s 00:05:03.714 sys 0m0.019s 00:05:03.714 ************************************ 00:05:03.714 END TEST rpc_plugins 00:05:03.714 ************************************ 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.714 09:22:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.714 09:22:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.714 09:22:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.714 09:22:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.714 09:22:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.714 ************************************ 00:05:03.714 START TEST rpc_trace_cmd_test 00:05:03.714 ************************************ 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.714 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70939", 00:05:03.714 "tpoint_group_mask": "0x8", 00:05:03.714 "iscsi_conn": { 00:05:03.714 "mask": "0x2", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "scsi": { 00:05:03.714 "mask": "0x4", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "bdev": { 00:05:03.714 "mask": "0x8", 00:05:03.714 "tpoint_mask": "0xffffffffffffffff" 00:05:03.714 }, 00:05:03.714 "nvmf_rdma": { 00:05:03.714 "mask": "0x10", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "nvmf_tcp": { 00:05:03.714 "mask": "0x20", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "ftl": { 00:05:03.714 "mask": "0x40", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "blobfs": { 00:05:03.714 "mask": "0x80", 00:05:03.714 
"tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "dsa": { 00:05:03.714 "mask": "0x200", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "thread": { 00:05:03.714 "mask": "0x400", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "nvme_pcie": { 00:05:03.714 "mask": "0x800", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "iaa": { 00:05:03.714 "mask": "0x1000", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "nvme_tcp": { 00:05:03.714 "mask": "0x2000", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "bdev_nvme": { 00:05:03.714 "mask": "0x4000", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "sock": { 00:05:03.714 "mask": "0x8000", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "blob": { 00:05:03.714 "mask": "0x10000", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "bdev_raid": { 00:05:03.714 "mask": "0x20000", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 }, 00:05:03.714 "scheduler": { 00:05:03.714 "mask": "0x40000", 00:05:03.714 "tpoint_mask": "0x0" 00:05:03.714 } 00:05:03.714 }' 00:05:03.714 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.977 00:05:03.977 real 0m0.171s 00:05:03.977 user 0m0.135s 00:05:03.977 sys 0m0.025s 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.977 ************************************ 00:05:03.977 END TEST rpc_trace_cmd_test 00:05:03.977 ************************************ 00:05:03.977 09:22:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.977 09:22:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.977 09:22:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.977 09:22:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.977 09:22:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.977 09:22:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.977 09:22:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.977 ************************************ 00:05:03.977 START TEST rpc_daemon_integrity 00:05:03.977 ************************************ 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.977 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.239 { 00:05:04.239 "name": "Malloc2", 00:05:04.239 "aliases": [ 00:05:04.239 "163c15b9-3a10-48f8-927e-800ee0025442" 00:05:04.239 ], 00:05:04.239 "product_name": "Malloc disk", 00:05:04.239 "block_size": 512, 00:05:04.239 "num_blocks": 16384, 00:05:04.239 "uuid": "163c15b9-3a10-48f8-927e-800ee0025442", 00:05:04.239 "assigned_rate_limits": { 00:05:04.239 "rw_ios_per_sec": 0, 00:05:04.239 "rw_mbytes_per_sec": 0, 00:05:04.239 "r_mbytes_per_sec": 0, 00:05:04.239 "w_mbytes_per_sec": 0 00:05:04.239 }, 00:05:04.239 "claimed": false, 00:05:04.239 "zoned": false, 00:05:04.239 "supported_io_types": { 00:05:04.239 "read": true, 00:05:04.239 "write": true, 00:05:04.239 "unmap": true, 00:05:04.239 "flush": true, 00:05:04.239 "reset": true, 00:05:04.239 "nvme_admin": false, 00:05:04.239 "nvme_io": false, 00:05:04.239 "nvme_io_md": false, 00:05:04.239 "write_zeroes": true, 00:05:04.239 "zcopy": true, 00:05:04.239 "get_zone_info": false, 00:05:04.239 "zone_management": false, 00:05:04.239 "zone_append": false, 00:05:04.239 "compare": false, 00:05:04.239 "compare_and_write": false, 00:05:04.239 "abort": true, 00:05:04.239 "seek_hole": false, 00:05:04.239 "seek_data": false, 00:05:04.239 "copy": true, 00:05:04.239 "nvme_iov_md": false 00:05:04.239 }, 00:05:04.239 "memory_domains": [ 00:05:04.239 { 00:05:04.239 "dma_device_id": "system", 00:05:04.239 "dma_device_type": 1 00:05:04.239 }, 00:05:04.239 { 00:05:04.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.239 "dma_device_type": 2 00:05:04.239 } 00:05:04.239 ], 00:05:04.239 "driver_specific": {} 00:05:04.239 } 00:05:04.239 ]' 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.239 [2024-11-29 09:22:31.762364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:04.239 [2024-11-29 09:22:31.762455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.239 [2024-11-29 09:22:31.762484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009680 00:05:04.239 [2024-11-29 09:22:31.762498] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.239 [2024-11-29 09:22:31.765280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.239 [2024-11-29 09:22:31.765339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.239 Passthru0 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.239 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.239 { 00:05:04.239 "name": "Malloc2", 00:05:04.239 "aliases": [ 00:05:04.239 "163c15b9-3a10-48f8-927e-800ee0025442" 00:05:04.239 ], 00:05:04.239 "product_name": "Malloc disk", 00:05:04.239 "block_size": 512, 00:05:04.239 "num_blocks": 16384, 00:05:04.239 "uuid": "163c15b9-3a10-48f8-927e-800ee0025442", 00:05:04.239 "assigned_rate_limits": { 00:05:04.239 "rw_ios_per_sec": 0, 00:05:04.239 "rw_mbytes_per_sec": 0, 00:05:04.239 "r_mbytes_per_sec": 0, 00:05:04.239 "w_mbytes_per_sec": 0 00:05:04.239 }, 00:05:04.239 "claimed": true, 00:05:04.239 "claim_type": "exclusive_write", 00:05:04.239 "zoned": false, 00:05:04.239 "supported_io_types": { 00:05:04.239 "read": true, 00:05:04.239 "write": true, 00:05:04.239 "unmap": true, 00:05:04.239 "flush": true, 00:05:04.239 "reset": true, 00:05:04.239 "nvme_admin": false, 00:05:04.239 "nvme_io": false, 00:05:04.239 "nvme_io_md": false, 00:05:04.239 "write_zeroes": true, 00:05:04.239 "zcopy": true, 00:05:04.239 "get_zone_info": false, 00:05:04.239 "zone_management": false, 00:05:04.239 "zone_append": false, 00:05:04.239 "compare": false, 00:05:04.239 "compare_and_write": false, 00:05:04.239 "abort": true, 00:05:04.239 "seek_hole": false, 00:05:04.239 "seek_data": false, 00:05:04.239 "copy": true, 00:05:04.239 "nvme_iov_md": false 00:05:04.239 }, 00:05:04.239 "memory_domains": [ 00:05:04.239 { 00:05:04.239 "dma_device_id": "system", 00:05:04.239 "dma_device_type": 1 00:05:04.239 }, 00:05:04.239 { 00:05:04.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.239 "dma_device_type": 2 00:05:04.239 } 00:05:04.239 ], 00:05:04.239 "driver_specific": {} 00:05:04.239 }, 00:05:04.239 { 00:05:04.239 "name": "Passthru0", 00:05:04.239 "aliases": [ 00:05:04.239 "aeeab948-4a17-5600-9594-f51df3f68cba" 00:05:04.239 ], 00:05:04.239 "product_name": "passthru", 00:05:04.239 "block_size": 512, 00:05:04.239 "num_blocks": 16384, 00:05:04.239 "uuid": "aeeab948-4a17-5600-9594-f51df3f68cba", 00:05:04.239 "assigned_rate_limits": { 00:05:04.239 "rw_ios_per_sec": 0, 00:05:04.239 "rw_mbytes_per_sec": 0, 00:05:04.239 "r_mbytes_per_sec": 0, 00:05:04.239 "w_mbytes_per_sec": 0 00:05:04.239 }, 00:05:04.239 "claimed": false, 00:05:04.239 "zoned": false, 00:05:04.239 "supported_io_types": { 00:05:04.239 "read": true, 00:05:04.239 "write": true, 00:05:04.239 "unmap": true, 00:05:04.239 "flush": true, 00:05:04.239 "reset": true, 00:05:04.239 "nvme_admin": false, 00:05:04.239 "nvme_io": false, 00:05:04.239 "nvme_io_md": false, 00:05:04.239 "write_zeroes": true, 00:05:04.239 "zcopy": true, 00:05:04.239 "get_zone_info": false, 00:05:04.239 
"zone_management": false, 00:05:04.239 "zone_append": false, 00:05:04.239 "compare": false, 00:05:04.240 "compare_and_write": false, 00:05:04.240 "abort": true, 00:05:04.240 "seek_hole": false, 00:05:04.240 "seek_data": false, 00:05:04.240 "copy": true, 00:05:04.240 "nvme_iov_md": false 00:05:04.240 }, 00:05:04.240 "memory_domains": [ 00:05:04.240 { 00:05:04.240 "dma_device_id": "system", 00:05:04.240 "dma_device_type": 1 00:05:04.240 }, 00:05:04.240 { 00:05:04.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.240 "dma_device_type": 2 00:05:04.240 } 00:05:04.240 ], 00:05:04.240 "driver_specific": { 00:05:04.240 "passthru": { 00:05:04.240 "name": "Passthru0", 00:05:04.240 "base_bdev_name": "Malloc2" 00:05:04.240 } 00:05:04.240 } 00:05:04.240 } 00:05:04.240 ]' 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.240 00:05:04.240 real 0m0.230s 00:05:04.240 user 0m0.131s 00:05:04.240 sys 0m0.034s 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.240 ************************************ 00:05:04.240 END TEST rpc_daemon_integrity 00:05:04.240 ************************************ 00:05:04.240 09:22:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.240 09:22:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.240 09:22:31 rpc -- rpc/rpc.sh@84 -- # killprocess 70939 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@954 -- # '[' -z 70939 ']' 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@958 -- # kill -0 70939 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@959 -- # uname 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70939 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.240 killing process with pid 70939 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70939' 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@973 -- # kill 70939 00:05:04.240 09:22:31 rpc -- common/autotest_common.sh@978 -- # wait 70939 00:05:04.829 00:05:04.829 real 0m2.663s 00:05:04.829 user 0m3.020s 00:05:04.829 sys 0m0.725s 00:05:04.829 09:22:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.829 09:22:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.829 ************************************ 00:05:04.829 END TEST rpc 00:05:04.829 ************************************ 00:05:04.830 09:22:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.830 09:22:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.830 09:22:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.830 09:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:05.104 ************************************ 00:05:05.104 START TEST skip_rpc 00:05:05.104 ************************************ 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:05.104 * Looking for test storage... 00:05:05.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.104 09:22:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.104 --rc genhtml_branch_coverage=1 00:05:05.104 --rc genhtml_function_coverage=1 00:05:05.104 --rc genhtml_legend=1 00:05:05.104 --rc geninfo_all_blocks=1 00:05:05.104 --rc geninfo_unexecuted_blocks=1 00:05:05.104 00:05:05.104 ' 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.104 --rc genhtml_branch_coverage=1 00:05:05.104 --rc genhtml_function_coverage=1 00:05:05.104 --rc genhtml_legend=1 00:05:05.104 --rc geninfo_all_blocks=1 00:05:05.104 --rc geninfo_unexecuted_blocks=1 00:05:05.104 00:05:05.104 ' 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.104 --rc genhtml_branch_coverage=1 00:05:05.104 --rc genhtml_function_coverage=1 00:05:05.104 --rc genhtml_legend=1 00:05:05.104 --rc geninfo_all_blocks=1 00:05:05.104 --rc geninfo_unexecuted_blocks=1 00:05:05.104 00:05:05.104 ' 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.104 --rc genhtml_branch_coverage=1 00:05:05.104 --rc genhtml_function_coverage=1 00:05:05.104 --rc genhtml_legend=1 00:05:05.104 --rc geninfo_all_blocks=1 00:05:05.104 --rc geninfo_unexecuted_blocks=1 00:05:05.104 00:05:05.104 ' 00:05:05.104 09:22:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:05.104 09:22:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.104 09:22:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.104 09:22:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.104 ************************************ 00:05:05.104 START TEST skip_rpc 00:05:05.104 ************************************ 00:05:05.104 09:22:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:05.104 09:22:32 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71141 00:05:05.104 09:22:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.104 09:22:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:05.104 09:22:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:05.104 [2024-11-29 09:22:32.811334] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:05.104 [2024-11-29 09:22:32.811497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71141 ] 00:05:05.366 [2024-11-29 09:22:32.950776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:05.366 [2024-11-29 09:22:32.978886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.366 [2024-11-29 09:22:33.017813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71141 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71141 ']' 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71141 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71141 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.685 killing process with pid 71141 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71141' 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71141 00:05:10.685 09:22:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71141 00:05:10.685 00:05:10.685 real 0m5.341s 00:05:10.685 user 0m4.822s 00:05:10.685 sys 0m0.419s 00:05:10.685 09:22:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.685 ************************************ 00:05:10.685 END TEST skip_rpc 00:05:10.685 ************************************ 00:05:10.685 09:22:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 09:22:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.685 09:22:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.685 09:22:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.685 09:22:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.685 ************************************ 00:05:10.685 START TEST skip_rpc_with_json 00:05:10.685 ************************************ 00:05:10.685 09:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:10.685 09:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.685 09:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71228 00:05:10.685 09:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.685 09:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71228 00:05:10.685 09:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71228 ']' 00:05:10.685 09:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.686 09:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.686 09:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.686 09:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.686 09:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.686 09:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.686 [2024-11-29 09:22:38.191501] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:10.686 [2024-11-29 09:22:38.191619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71228 ] 00:05:10.686 [2024-11-29 09:22:38.319415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
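skip_rpc, which just finished, is a negative test: with --no-rpc-server nothing listens on the socket, so spdk_get_version must fail, and the harness's NOT wrapper inverts the exit status. In miniature (sleep 5 mirrors the harness; the failure branch stands in for NOT):

    # No RPC server is started, so the call must fail for the test to pass.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5                                     # the harness waits the same way before probing
    if scripts/rpc.py spdk_get_version; then
        echo 'unexpected RPC success' >&2; exit 1
    fi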
00:05:10.686 [2024-11-29 09:22:38.349517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.686 [2024-11-29 09:22:38.368685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.629 [2024-11-29 09:22:39.043500] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.629 request: 00:05:11.629 { 00:05:11.629 "trtype": "tcp", 00:05:11.629 "method": "nvmf_get_transports", 00:05:11.629 "req_id": 1 00:05:11.629 } 00:05:11.629 Got JSON-RPC error response 00:05:11.629 response: 00:05:11.629 { 00:05:11.629 "code": -19, 00:05:11.629 "message": "No such device" 00:05:11.629 } 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.629 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.630 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.630 [2024-11-29 09:22:39.055614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.630 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.630 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.630 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.630 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.630 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.630 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.630 { 00:05:11.630 "subsystems": [ 00:05:11.630 { 00:05:11.630 "subsystem": "fsdev", 00:05:11.630 "config": [ 00:05:11.630 { 00:05:11.630 "method": "fsdev_set_opts", 00:05:11.630 "params": { 00:05:11.630 "fsdev_io_pool_size": 65535, 00:05:11.630 "fsdev_io_cache_size": 256 00:05:11.630 } 00:05:11.630 } 00:05:11.630 ] 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "subsystem": "keyring", 00:05:11.630 "config": [] 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "subsystem": "iobuf", 00:05:11.630 "config": [ 00:05:11.630 { 00:05:11.630 "method": "iobuf_set_options", 00:05:11.630 "params": { 00:05:11.630 "small_pool_count": 8192, 00:05:11.630 "large_pool_count": 1024, 00:05:11.630 "small_bufsize": 8192, 00:05:11.630 "large_bufsize": 135168, 00:05:11.630 "enable_numa": false 00:05:11.630 } 00:05:11.630 } 00:05:11.630 ] 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "subsystem": "sock", 00:05:11.630 "config": [ 00:05:11.630 { 00:05:11.630 "method": "sock_set_default_impl", 00:05:11.630 "params": { 00:05:11.630 "impl_name": "posix" 00:05:11.630 } 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "method": "sock_impl_set_options", 00:05:11.630 "params": { 00:05:11.630 "impl_name": "ssl", 00:05:11.630 "recv_buf_size": 4096, 00:05:11.630 
"send_buf_size": 4096, 00:05:11.630 "enable_recv_pipe": true, 00:05:11.630 "enable_quickack": false, 00:05:11.630 "enable_placement_id": 0, 00:05:11.630 "enable_zerocopy_send_server": true, 00:05:11.630 "enable_zerocopy_send_client": false, 00:05:11.630 "zerocopy_threshold": 0, 00:05:11.630 "tls_version": 0, 00:05:11.630 "enable_ktls": false 00:05:11.630 } 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "method": "sock_impl_set_options", 00:05:11.630 "params": { 00:05:11.630 "impl_name": "posix", 00:05:11.630 "recv_buf_size": 2097152, 00:05:11.630 "send_buf_size": 2097152, 00:05:11.630 "enable_recv_pipe": true, 00:05:11.630 "enable_quickack": false, 00:05:11.630 "enable_placement_id": 0, 00:05:11.630 "enable_zerocopy_send_server": true, 00:05:11.630 "enable_zerocopy_send_client": false, 00:05:11.630 "zerocopy_threshold": 0, 00:05:11.630 "tls_version": 0, 00:05:11.630 "enable_ktls": false 00:05:11.630 } 00:05:11.630 } 00:05:11.630 ] 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "subsystem": "vmd", 00:05:11.630 "config": [] 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "subsystem": "accel", 00:05:11.630 "config": [ 00:05:11.630 { 00:05:11.630 "method": "accel_set_options", 00:05:11.630 "params": { 00:05:11.630 "small_cache_size": 128, 00:05:11.630 "large_cache_size": 16, 00:05:11.630 "task_count": 2048, 00:05:11.630 "sequence_count": 2048, 00:05:11.630 "buf_count": 2048 00:05:11.630 } 00:05:11.630 } 00:05:11.630 ] 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "subsystem": "bdev", 00:05:11.630 "config": [ 00:05:11.630 { 00:05:11.630 "method": "bdev_set_options", 00:05:11.630 "params": { 00:05:11.630 "bdev_io_pool_size": 65535, 00:05:11.630 "bdev_io_cache_size": 256, 00:05:11.630 "bdev_auto_examine": true, 00:05:11.630 "iobuf_small_cache_size": 128, 00:05:11.630 "iobuf_large_cache_size": 16 00:05:11.630 } 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "method": "bdev_raid_set_options", 00:05:11.630 "params": { 00:05:11.630 "process_window_size_kb": 1024, 00:05:11.630 "process_max_bandwidth_mb_sec": 0 00:05:11.630 } 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "method": "bdev_iscsi_set_options", 00:05:11.630 "params": { 00:05:11.630 "timeout_sec": 30 00:05:11.630 } 00:05:11.630 }, 00:05:11.630 { 00:05:11.630 "method": "bdev_nvme_set_options", 00:05:11.630 "params": { 00:05:11.630 "action_on_timeout": "none", 00:05:11.630 "timeout_us": 0, 00:05:11.630 "timeout_admin_us": 0, 00:05:11.630 "keep_alive_timeout_ms": 10000, 00:05:11.630 "arbitration_burst": 0, 00:05:11.630 "low_priority_weight": 0, 00:05:11.630 "medium_priority_weight": 0, 00:05:11.630 "high_priority_weight": 0, 00:05:11.630 "nvme_adminq_poll_period_us": 10000, 00:05:11.630 "nvme_ioq_poll_period_us": 0, 00:05:11.630 "io_queue_requests": 0, 00:05:11.630 "delay_cmd_submit": true, 00:05:11.630 "transport_retry_count": 4, 00:05:11.630 "bdev_retry_count": 3, 00:05:11.630 "transport_ack_timeout": 0, 00:05:11.630 "ctrlr_loss_timeout_sec": 0, 00:05:11.630 "reconnect_delay_sec": 0, 00:05:11.630 "fast_io_fail_timeout_sec": 0, 00:05:11.630 "disable_auto_failback": false, 00:05:11.630 "generate_uuids": false, 00:05:11.630 "transport_tos": 0, 00:05:11.630 "nvme_error_stat": false, 00:05:11.630 "rdma_srq_size": 0, 00:05:11.630 "io_path_stat": false, 00:05:11.630 "allow_accel_sequence": false, 00:05:11.630 "rdma_max_cq_size": 0, 00:05:11.630 "rdma_cm_event_timeout_ms": 0, 00:05:11.630 "dhchap_digests": [ 00:05:11.630 "sha256", 00:05:11.630 "sha384", 00:05:11.630 "sha512" 00:05:11.630 ], 00:05:11.630 "dhchap_dhgroups": [ 00:05:11.630 "null", 00:05:11.630 
"ffdhe2048", 00:05:11.630 "ffdhe3072", 00:05:11.630 "ffdhe4096", 00:05:11.630 "ffdhe6144", 00:05:11.630 "ffdhe8192" 00:05:11.630 ] 00:05:11.630 } 00:05:11.630 }, 00:05:11.631 { 00:05:11.631 "method": "bdev_nvme_set_hotplug", 00:05:11.631 "params": { 00:05:11.631 "period_us": 100000, 00:05:11.631 "enable": false 00:05:11.631 } 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "method": "bdev_wait_for_examine" 00:05:11.631 } 00:05:11.631 ] 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "scsi", 00:05:11.631 "config": null 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "scheduler", 00:05:11.631 "config": [ 00:05:11.631 { 00:05:11.631 "method": "framework_set_scheduler", 00:05:11.631 "params": { 00:05:11.631 "name": "static" 00:05:11.631 } 00:05:11.631 } 00:05:11.631 ] 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "vhost_scsi", 00:05:11.631 "config": [] 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "vhost_blk", 00:05:11.631 "config": [] 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "ublk", 00:05:11.631 "config": [] 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "nbd", 00:05:11.631 "config": [] 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "nvmf", 00:05:11.631 "config": [ 00:05:11.631 { 00:05:11.631 "method": "nvmf_set_config", 00:05:11.631 "params": { 00:05:11.631 "discovery_filter": "match_any", 00:05:11.631 "admin_cmd_passthru": { 00:05:11.631 "identify_ctrlr": false 00:05:11.631 }, 00:05:11.631 "dhchap_digests": [ 00:05:11.631 "sha256", 00:05:11.631 "sha384", 00:05:11.631 "sha512" 00:05:11.631 ], 00:05:11.631 "dhchap_dhgroups": [ 00:05:11.631 "null", 00:05:11.631 "ffdhe2048", 00:05:11.631 "ffdhe3072", 00:05:11.631 "ffdhe4096", 00:05:11.631 "ffdhe6144", 00:05:11.631 "ffdhe8192" 00:05:11.631 ] 00:05:11.631 } 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "method": "nvmf_set_max_subsystems", 00:05:11.631 "params": { 00:05:11.631 "max_subsystems": 1024 00:05:11.631 } 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "method": "nvmf_set_crdt", 00:05:11.631 "params": { 00:05:11.631 "crdt1": 0, 00:05:11.631 "crdt2": 0, 00:05:11.631 "crdt3": 0 00:05:11.631 } 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "method": "nvmf_create_transport", 00:05:11.631 "params": { 00:05:11.631 "trtype": "TCP", 00:05:11.631 "max_queue_depth": 128, 00:05:11.631 "max_io_qpairs_per_ctrlr": 127, 00:05:11.631 "in_capsule_data_size": 4096, 00:05:11.631 "max_io_size": 131072, 00:05:11.631 "io_unit_size": 131072, 00:05:11.631 "max_aq_depth": 128, 00:05:11.631 "num_shared_buffers": 511, 00:05:11.631 "buf_cache_size": 4294967295, 00:05:11.631 "dif_insert_or_strip": false, 00:05:11.631 "zcopy": false, 00:05:11.631 "c2h_success": true, 00:05:11.631 "sock_priority": 0, 00:05:11.631 "abort_timeout_sec": 1, 00:05:11.631 "ack_timeout": 0, 00:05:11.631 "data_wr_pool_size": 0 00:05:11.631 } 00:05:11.631 } 00:05:11.631 ] 00:05:11.631 }, 00:05:11.631 { 00:05:11.631 "subsystem": "iscsi", 00:05:11.631 "config": [ 00:05:11.631 { 00:05:11.631 "method": "iscsi_set_options", 00:05:11.631 "params": { 00:05:11.631 "node_base": "iqn.2016-06.io.spdk", 00:05:11.631 "max_sessions": 128, 00:05:11.631 "max_connections_per_session": 2, 00:05:11.631 "max_queue_depth": 64, 00:05:11.631 "default_time2wait": 2, 00:05:11.631 "default_time2retain": 20, 00:05:11.631 "first_burst_length": 8192, 00:05:11.631 "immediate_data": true, 00:05:11.631 "allow_duplicated_isid": false, 00:05:11.631 "error_recovery_level": 0, 00:05:11.631 "nop_timeout": 60, 00:05:11.631 "nop_in_interval": 30, 00:05:11.631 
"disable_chap": false, 00:05:11.631 "require_chap": false, 00:05:11.631 "mutual_chap": false, 00:05:11.631 "chap_group": 0, 00:05:11.631 "max_large_datain_per_connection": 64, 00:05:11.631 "max_r2t_per_connection": 4, 00:05:11.631 "pdu_pool_size": 36864, 00:05:11.631 "immediate_data_pool_size": 16384, 00:05:11.631 "data_out_pool_size": 2048 00:05:11.631 } 00:05:11.631 } 00:05:11.631 ] 00:05:11.631 } 00:05:11.631 ] 00:05:11.631 } 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71228 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71228 ']' 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71228 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71228 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.631 killing process with pid 71228 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71228' 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71228 00:05:11.631 09:22:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71228 00:05:11.893 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71251 00:05:11.893 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:11.893 09:22:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71251 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71251 ']' 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71251 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71251 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71251' 00:05:17.184 killing process with pid 71251 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71251 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71251 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.184 09:22:44 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.184 ************************************ 00:05:17.184 00:05:17.184 real 0m6.652s 00:05:17.184 user 0m6.256s 00:05:17.184 sys 0m0.634s 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.184 END TEST skip_rpc_with_json 00:05:17.184 ************************************ 00:05:17.184 09:22:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.184 09:22:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.184 09:22:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.184 09:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.184 ************************************ 00:05:17.184 START TEST skip_rpc_with_delay 00:05:17.184 ************************************ 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:17.184 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.460 [2024-11-29 09:22:44.911534] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
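skip_rpc_with_json, closed out above, is a save/replay round trip: build state over RPC (the tcp transport), dump it with save_config, restart the target from that JSON with the RPC server disabled, then grep the new instance's log for the transport-init banner. A sketch using the exact paths from this run:

    # RPC-built state -> JSON -> config-driven restart.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt   # proves the JSON reapplied the transport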
00:05:17.460 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:17.460 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.460 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.460 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.460 00:05:17.460 real 0m0.137s 00:05:17.460 user 0m0.078s 00:05:17.460 sys 0m0.057s 00:05:17.460 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.460 ************************************ 00:05:17.460 END TEST skip_rpc_with_delay 00:05:17.460 ************************************ 00:05:17.460 09:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:17.460 09:22:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.460 09:22:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.460 09:22:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.460 09:22:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.460 09:22:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.460 09:22:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.460 ************************************ 00:05:17.460 START TEST exit_on_failed_rpc_init 00:05:17.460 ************************************ 00:05:17.460 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:17.460 09:22:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71363 00:05:17.460 09:22:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71363 00:05:17.460 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71363 ']' 00:05:17.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.461 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.461 09:22:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.461 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.461 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.461 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.461 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.461 [2024-11-29 09:22:45.112154] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:17.461 [2024-11-29 09:22:45.112305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71363 ] 00:05:17.719 [2024-11-29 09:22:45.250515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
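waitforlisten, echoed above for pid 71363, blocks the test until the freshly started target answers on its RPC socket; the polling itself runs under xtrace_disable, so it leaves no trace here. A sketch of the pattern, assuming rpc_get_methods is the liveness probe:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1     # target died during startup
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                                # RPC server is answering
            fi
            sleep 0.5
        done
        return 1                                        # never came up
    }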
00:05:17.719 [2024-11-29 09:22:45.274142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.719 [2024-11-29 09:22:45.296812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.286 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.286 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:18.286 09:22:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.286 09:22:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.286 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:18.286 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:18.287 09:22:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.545 [2024-11-29 09:22:46.021482] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:18.545 [2024-11-29 09:22:46.021607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:05:18.545 [2024-11-29 09:22:46.152073] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:18.545 [2024-11-29 09:22:46.174540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.545 [2024-11-29 09:22:46.192715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.545 [2024-11-29 09:22:46.192790] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
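That "socket path in use" error is the whole point of exit_on_failed_rpc_init: both instances default to /var/tmp/spdk.sock, so the second bind fails and spdk_app_stop exits non-zero. Outside the harness, two targets coexist by giving each its own RPC socket with -r (socket paths below are arbitrary):

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_first.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &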
00:05:18.545 [2024-11-29 09:22:46.192802] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.545 [2024-11-29 09:22:46.192813] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71363 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71363 ']' 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71363 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.545 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71363 00:05:18.805 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.805 killing process with pid 71363 00:05:18.806 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.806 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71363' 00:05:18.806 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71363 00:05:18.806 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71363 00:05:18.806 00:05:18.806 real 0m1.483s 00:05:18.806 user 0m1.599s 00:05:18.806 sys 0m0.415s 00:05:18.806 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.806 09:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.806 ************************************ 00:05:18.806 END TEST exit_on_failed_rpc_init 00:05:18.806 ************************************ 00:05:19.067 09:22:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:19.067 00:05:19.067 real 0m14.003s 00:05:19.067 user 0m12.907s 00:05:19.067 sys 0m1.708s 00:05:19.067 09:22:46 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.067 09:22:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.067 ************************************ 00:05:19.067 END TEST skip_rpc 00:05:19.067 ************************************ 00:05:19.067 09:22:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.067 09:22:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.067 09:22:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.067 09:22:46 -- common/autotest_common.sh@10 -- # set +x 00:05:19.067 
************************************ 00:05:19.067 START TEST rpc_client 00:05:19.067 ************************************ 00:05:19.067 09:22:46 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.067 * Looking for test storage... 00:05:19.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:19.067 09:22:46 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.067 09:22:46 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.067 09:22:46 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.067 09:22:46 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.067 09:22:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:19.067 09:22:46 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.067 09:22:46 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.067 --rc genhtml_branch_coverage=1 00:05:19.067 --rc genhtml_function_coverage=1 00:05:19.067 --rc genhtml_legend=1 00:05:19.067 --rc geninfo_all_blocks=1 00:05:19.068 --rc geninfo_unexecuted_blocks=1 00:05:19.068 00:05:19.068 ' 00:05:19.068 09:22:46 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.068 --rc genhtml_branch_coverage=1 00:05:19.068 --rc genhtml_function_coverage=1 00:05:19.068 --rc genhtml_legend=1 00:05:19.068 --rc geninfo_all_blocks=1 00:05:19.068 --rc geninfo_unexecuted_blocks=1 00:05:19.068 00:05:19.068 ' 00:05:19.068 09:22:46 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.068 --rc genhtml_branch_coverage=1 00:05:19.068 --rc genhtml_function_coverage=1 00:05:19.068 --rc genhtml_legend=1 00:05:19.068 --rc geninfo_all_blocks=1 00:05:19.068 --rc geninfo_unexecuted_blocks=1 00:05:19.068 00:05:19.068 ' 00:05:19.068 09:22:46 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.068 --rc genhtml_branch_coverage=1 00:05:19.068 --rc genhtml_function_coverage=1 00:05:19.068 --rc genhtml_legend=1 00:05:19.068 --rc geninfo_all_blocks=1 00:05:19.068 --rc geninfo_unexecuted_blocks=1 00:05:19.068 00:05:19.068 ' 00:05:19.068 09:22:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:19.068 OK 00:05:19.329 09:22:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.329 ************************************ 00:05:19.329 END TEST rpc_client 00:05:19.329 ************************************ 00:05:19.329 00:05:19.329 real 0m0.197s 00:05:19.329 user 0m0.104s 00:05:19.329 sys 0m0.098s 00:05:19.329 09:22:46 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.329 09:22:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:19.329 09:22:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.329 09:22:46 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.329 09:22:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.329 09:22:46 -- common/autotest_common.sh@10 -- # set +x 00:05:19.329 ************************************ 00:05:19.329 START TEST json_config 00:05:19.329 ************************************ 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.329 09:22:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.329 09:22:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.329 09:22:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.329 09:22:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.329 09:22:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.329 09:22:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.329 09:22:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.329 09:22:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:19.329 09:22:46 json_config -- scripts/common.sh@345 -- # : 1 00:05:19.329 09:22:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.329 09:22:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.329 09:22:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:19.329 09:22:46 json_config -- scripts/common.sh@353 -- # local d=1 00:05:19.329 09:22:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.329 09:22:46 json_config -- scripts/common.sh@355 -- # echo 1 00:05:19.329 09:22:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.329 09:22:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@353 -- # local d=2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.329 09:22:46 json_config -- scripts/common.sh@355 -- # echo 2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.329 09:22:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.329 09:22:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.329 09:22:46 json_config -- scripts/common.sh@368 -- # return 0 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.329 --rc genhtml_branch_coverage=1 00:05:19.329 --rc genhtml_function_coverage=1 00:05:19.329 --rc genhtml_legend=1 00:05:19.329 --rc geninfo_all_blocks=1 00:05:19.329 --rc geninfo_unexecuted_blocks=1 00:05:19.329 00:05:19.329 ' 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.329 --rc genhtml_branch_coverage=1 00:05:19.329 --rc genhtml_function_coverage=1 00:05:19.329 --rc genhtml_legend=1 00:05:19.329 --rc geninfo_all_blocks=1 00:05:19.329 --rc geninfo_unexecuted_blocks=1 00:05:19.329 00:05:19.329 ' 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.329 --rc genhtml_branch_coverage=1 00:05:19.329 --rc genhtml_function_coverage=1 00:05:19.329 --rc genhtml_legend=1 00:05:19.329 --rc geninfo_all_blocks=1 00:05:19.329 --rc geninfo_unexecuted_blocks=1 00:05:19.329 00:05:19.329 ' 00:05:19.329 09:22:46 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.329 --rc genhtml_branch_coverage=1 00:05:19.329 --rc genhtml_function_coverage=1 00:05:19.329 --rc genhtml_legend=1 00:05:19.329 --rc geninfo_all_blocks=1 00:05:19.329 --rc geninfo_unexecuted_blocks=1 00:05:19.329 00:05:19.329 ' 00:05:19.329 09:22:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.329 09:22:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.329 09:22:46 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:67e40bf6-6f06-4caa-b4ab-dc6264607e2b 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=67e40bf6-6f06-4caa-b4ab-dc6264607e2b 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.330 09:22:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.330 09:22:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.330 09:22:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.330 09:22:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.330 09:22:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.330 09:22:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.330 09:22:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.330 09:22:46 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.330 09:22:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@51 -- # : 0 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.330 09:22:46 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.330 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.330 09:22:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.330 09:22:46 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.330 09:22:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.330 09:22:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.330 09:22:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.330 09:22:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.330 09:22:46 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:19.330 WARNING: No tests are enabled so not running JSON configuration tests 00:05:19.330 09:22:46 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:19.330 00:05:19.330 real 0m0.138s 00:05:19.330 user 0m0.080s 00:05:19.330 sys 0m0.060s 00:05:19.330 09:22:46 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.330 09:22:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.330 ************************************ 00:05:19.330 END TEST json_config 00:05:19.330 ************************************ 00:05:19.330 09:22:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.330 09:22:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.330 09:22:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.330 09:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:19.330 ************************************ 00:05:19.330 START TEST json_config_extra_key 00:05:19.330 ************************************ 00:05:19.330 09:22:47 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.592 09:22:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.592 09:22:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.592 09:22:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.592 09:22:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.592 09:22:47 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:19.592 09:22:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.593 --rc genhtml_branch_coverage=1 00:05:19.593 --rc genhtml_function_coverage=1 00:05:19.593 --rc genhtml_legend=1 00:05:19.593 --rc geninfo_all_blocks=1 00:05:19.593 --rc geninfo_unexecuted_blocks=1 00:05:19.593 00:05:19.593 ' 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.593 --rc genhtml_branch_coverage=1 00:05:19.593 --rc genhtml_function_coverage=1 00:05:19.593 --rc genhtml_legend=1 00:05:19.593 --rc geninfo_all_blocks=1 00:05:19.593 --rc geninfo_unexecuted_blocks=1 00:05:19.593 00:05:19.593 ' 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.593 --rc genhtml_branch_coverage=1 00:05:19.593 --rc genhtml_function_coverage=1 00:05:19.593 --rc genhtml_legend=1 00:05:19.593 --rc geninfo_all_blocks=1 00:05:19.593 --rc geninfo_unexecuted_blocks=1 00:05:19.593 00:05:19.593 ' 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.593 --rc genhtml_branch_coverage=1 00:05:19.593 --rc 
genhtml_function_coverage=1 00:05:19.593 --rc genhtml_legend=1 00:05:19.593 --rc geninfo_all_blocks=1 00:05:19.593 --rc geninfo_unexecuted_blocks=1 00:05:19.593 00:05:19.593 ' 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:67e40bf6-6f06-4caa-b4ab-dc6264607e2b 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=67e40bf6-6f06-4caa-b4ab-dc6264607e2b 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.593 09:22:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.593 09:22:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.593 09:22:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.593 09:22:47 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.593 09:22:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.593 09:22:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.593 09:22:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.593 INFO: launching applications... 00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
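test/json_config/common.sh, sourced above, keys its helpers off associative arrays indexed by app name; json_config_test_start_app then assembles the launch command from them, which is exactly the spdk_tgt invocation traced next. A condensed sketch of that wiring (a reconstruction, not the file's exact source, with $rootdir standing in for the repo checkout):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    json_config_test_start_app() {
        local app=$1
        # -s caps hugepage memory in MB; --json applies the config at boot
        "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!
        waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }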
00:05:19.593 09:22:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.593 Waiting for target to run... 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71558 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71558 /var/tmp/spdk_tgt.sock 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71558 ']' 00:05:19.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.593 09:22:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.593 09:22:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.593 [2024-11-29 09:22:47.226389] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:19.594 [2024-11-29 09:22:47.226507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71558 ] 00:05:19.855 [2024-11-29 09:22:47.523180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:19.855 [2024-11-29 09:22:47.553909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.855 [2024-11-29 09:22:47.565620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.427 00:05:20.427 09:22:48 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.427 09:22:48 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:20.427 09:22:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.427 INFO: shutting down applications... 00:05:20.427 09:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:20.427 09:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.427 09:22:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.427 09:22:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.427 09:22:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71558 ]] 00:05:20.427 09:22:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71558 00:05:20.427 09:22:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.427 09:22:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.428 09:22:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71558 00:05:20.428 09:22:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.999 09:22:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.999 09:22:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.999 09:22:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71558 00:05:20.999 09:22:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.999 09:22:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.999 09:22:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.999 SPDK target shutdown done 00:05:20.999 Success 00:05:20.999 09:22:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.999 09:22:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.999 00:05:20.999 real 0m1.551s 00:05:20.999 user 0m1.257s 00:05:20.999 sys 0m0.349s 00:05:20.999 09:22:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.999 09:22:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.999 ************************************ 00:05:20.999 END TEST json_config_extra_key 00:05:20.999 ************************************ 00:05:20.999 09:22:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.999 09:22:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.999 09:22:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.999 09:22:48 -- common/autotest_common.sh@10 -- # set +x 00:05:20.999 ************************************ 00:05:20.999 START TEST alias_rpc 00:05:20.999 ************************************ 00:05:20.999 09:22:48 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.999 * Looking for test storage... 
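The shutdown traced above is deliberately gentle: one SIGINT, then a bounded liveness poll before declaring success. Every step (kill -SIGINT, the i < 30 loop, kill -0, sleep 0.5) is visible in the xtrace; reassembled, it looks roughly like this:

    json_config_test_shutdown_app() {
        local app=$1 i
        kill -SIGINT "${app_pid[$app]}"
        for ((i = 0; i < 30; i++)); do
            kill -0 "${app_pid[$app]}" 2> /dev/null || break   # exited cleanly
            sleep 0.5
        done
        if kill -0 "${app_pid[$app]}" 2> /dev/null; then
            echo "target still alive after 15s" >&2
            return 1
        fi
        app_pid[$app]=''
        echo 'SPDK target shutdown done'
    }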
00:05:20.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:20.999 09:22:48 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.999 09:22:48 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.999 09:22:48 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.261 09:22:48 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.261 09:22:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.262 --rc genhtml_branch_coverage=1 00:05:21.262 --rc genhtml_function_coverage=1 00:05:21.262 --rc genhtml_legend=1 00:05:21.262 --rc geninfo_all_blocks=1 00:05:21.262 --rc geninfo_unexecuted_blocks=1 00:05:21.262 00:05:21.262 ' 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.262 --rc genhtml_branch_coverage=1 00:05:21.262 --rc genhtml_function_coverage=1 00:05:21.262 --rc genhtml_legend=1 00:05:21.262 --rc geninfo_all_blocks=1 00:05:21.262 --rc geninfo_unexecuted_blocks=1 00:05:21.262 00:05:21.262 ' 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.262 --rc genhtml_branch_coverage=1 00:05:21.262 --rc genhtml_function_coverage=1 00:05:21.262 --rc genhtml_legend=1 00:05:21.262 --rc geninfo_all_blocks=1 00:05:21.262 --rc geninfo_unexecuted_blocks=1 00:05:21.262 00:05:21.262 ' 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.262 --rc genhtml_branch_coverage=1 00:05:21.262 --rc genhtml_function_coverage=1 00:05:21.262 --rc genhtml_legend=1 00:05:21.262 --rc geninfo_all_blocks=1 00:05:21.262 --rc geninfo_unexecuted_blocks=1 00:05:21.262 00:05:21.262 ' 00:05:21.262 09:22:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.262 09:22:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71631 00:05:21.262 09:22:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71631 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71631 ']' 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.262 09:22:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.262 09:22:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.262 [2024-11-29 09:22:48.850763] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
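The lcov gate that each test binary re-runs above (lt 1.15 2 via cmp_versions) splits dotted versions on IFS=.-: and compares them field by field before exporting the coverage flags. A compact reconstruction, assuming purely numeric fields (the real decimal() helper also sanitizes non-numeric parts):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' ]]; return
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' ]]; return
            fi
        done
        [[ $op == *'='* ]]   # all fields equal: only <=, >=, == succeed
    }

Here lt 1.15 2 is true (1 < 2 in the first field), which is what switches on the branch and function coverage options echoed right after it.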
00:05:21.262 [2024-11-29 09:22:48.850877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71631 ] 00:05:21.262 [2024-11-29 09:22:48.983373] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:21.523 [2024-11-29 09:22:49.014848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.523 [2024-11-29 09:22:49.034919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.091 09:22:49 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.091 09:22:49 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:22.091 09:22:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:22.352 09:22:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71631 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71631 ']' 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71631 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71631 00:05:22.352 killing process with pid 71631 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71631' 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@973 -- # kill 71631 00:05:22.352 09:22:49 alias_rpc -- common/autotest_common.sh@978 -- # wait 71631 00:05:22.612 ************************************ 00:05:22.612 END TEST alias_rpc 00:05:22.612 ************************************ 00:05:22.612 00:05:22.612 real 0m1.565s 00:05:22.612 user 0m1.679s 00:05:22.612 sys 0m0.386s 00:05:22.612 09:22:50 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.612 09:22:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.612 09:22:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:22.612 09:22:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:22.612 09:22:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.612 09:22:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.612 09:22:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.612 ************************************ 00:05:22.612 START TEST spdkcli_tcp 00:05:22.612 ************************************ 00:05:22.612 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:22.612 * Looking for test storage... 
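killprocess, just traced for pid 71631 (and earlier for 71228, 71251 and 71363), is the stock teardown: validate the pid, resolve the process name, then kill and reap. A sketch of that sequence (illustrative; the real helper also special-cases sudo-wrapped processes before the plain kill):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # no pid was captured
        kill -0 "$pid" 2> /dev/null || return 1    # already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        echo "killing process with pid $pid"
        kill "$pid"     # SIGTERM; SPDK apps shut their reactors down on it
        wait "$pid"     # reap the child and surface its exit status
    }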
00:05:22.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:22.612 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.612 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.612 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.874 09:22:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.874 --rc genhtml_branch_coverage=1 00:05:22.874 --rc genhtml_function_coverage=1 00:05:22.874 --rc genhtml_legend=1 00:05:22.874 --rc geninfo_all_blocks=1 00:05:22.874 --rc geninfo_unexecuted_blocks=1 00:05:22.874 00:05:22.874 ' 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.874 --rc genhtml_branch_coverage=1 00:05:22.874 --rc genhtml_function_coverage=1 00:05:22.874 --rc genhtml_legend=1 00:05:22.874 --rc geninfo_all_blocks=1 00:05:22.874 --rc geninfo_unexecuted_blocks=1 00:05:22.874 
00:05:22.874 ' 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.874 --rc genhtml_branch_coverage=1 00:05:22.874 --rc genhtml_function_coverage=1 00:05:22.874 --rc genhtml_legend=1 00:05:22.874 --rc geninfo_all_blocks=1 00:05:22.874 --rc geninfo_unexecuted_blocks=1 00:05:22.874 00:05:22.874 ' 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.874 --rc genhtml_branch_coverage=1 00:05:22.874 --rc genhtml_function_coverage=1 00:05:22.874 --rc genhtml_legend=1 00:05:22.874 --rc geninfo_all_blocks=1 00:05:22.874 --rc geninfo_unexecuted_blocks=1 00:05:22.874 00:05:22.874 ' 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71716 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71716 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71716 ']' 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.874 09:22:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.874 09:22:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.875 [2024-11-29 09:22:50.489480] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:22.875 [2024-11-29 09:22:50.489615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71716 ] 00:05:23.136 [2024-11-29 09:22:50.623317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:23.136 [2024-11-29 09:22:50.653845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.136 [2024-11-29 09:22:50.674761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.136 [2024-11-29 09:22:50.674815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.708 09:22:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.708 09:22:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:23.708 09:22:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.708 09:22:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71728 00:05:23.708 09:22:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:23.984 [ 00:05:23.984 "bdev_malloc_delete", 00:05:23.984 "bdev_malloc_create", 00:05:23.984 "bdev_null_resize", 00:05:23.984 "bdev_null_delete", 00:05:23.984 "bdev_null_create", 00:05:23.984 "bdev_nvme_cuse_unregister", 00:05:23.984 "bdev_nvme_cuse_register", 00:05:23.984 "bdev_opal_new_user", 00:05:23.984 "bdev_opal_set_lock_state", 00:05:23.984 "bdev_opal_delete", 00:05:23.984 "bdev_opal_get_info", 00:05:23.984 "bdev_opal_create", 00:05:23.984 "bdev_nvme_opal_revert", 00:05:23.984 "bdev_nvme_opal_init", 00:05:23.984 "bdev_nvme_send_cmd", 00:05:23.984 "bdev_nvme_set_keys", 00:05:23.984 "bdev_nvme_get_path_iostat", 00:05:23.984 "bdev_nvme_get_mdns_discovery_info", 00:05:23.984 "bdev_nvme_stop_mdns_discovery", 00:05:23.984 "bdev_nvme_start_mdns_discovery", 00:05:23.984 "bdev_nvme_set_multipath_policy", 00:05:23.984 "bdev_nvme_set_preferred_path", 00:05:23.984 "bdev_nvme_get_io_paths", 00:05:23.984 "bdev_nvme_remove_error_injection", 00:05:23.984 "bdev_nvme_add_error_injection", 00:05:23.984 "bdev_nvme_get_discovery_info", 00:05:23.984 "bdev_nvme_stop_discovery", 00:05:23.984 "bdev_nvme_start_discovery", 00:05:23.984 "bdev_nvme_get_controller_health_info", 00:05:23.984 "bdev_nvme_disable_controller", 00:05:23.984 "bdev_nvme_enable_controller", 00:05:23.984 "bdev_nvme_reset_controller", 00:05:23.984 "bdev_nvme_get_transport_statistics", 00:05:23.984 "bdev_nvme_apply_firmware", 00:05:23.984 "bdev_nvme_detach_controller", 00:05:23.984 "bdev_nvme_get_controllers", 00:05:23.984 "bdev_nvme_attach_controller", 00:05:23.984 "bdev_nvme_set_hotplug", 00:05:23.984 "bdev_nvme_set_options", 00:05:23.984 "bdev_passthru_delete", 00:05:23.984 "bdev_passthru_create", 00:05:23.984 "bdev_lvol_set_parent_bdev", 00:05:23.984 "bdev_lvol_set_parent", 00:05:23.984 "bdev_lvol_check_shallow_copy", 00:05:23.984 "bdev_lvol_start_shallow_copy", 00:05:23.984 "bdev_lvol_grow_lvstore", 00:05:23.984 "bdev_lvol_get_lvols", 00:05:23.984 "bdev_lvol_get_lvstores", 00:05:23.984 "bdev_lvol_delete", 00:05:23.984 "bdev_lvol_set_read_only", 00:05:23.984 "bdev_lvol_resize", 00:05:23.984 "bdev_lvol_decouple_parent", 00:05:23.984 "bdev_lvol_inflate", 00:05:23.984 "bdev_lvol_rename", 00:05:23.984 "bdev_lvol_clone_bdev", 00:05:23.984 "bdev_lvol_clone", 00:05:23.984 "bdev_lvol_snapshot", 00:05:23.984 "bdev_lvol_create", 00:05:23.984 "bdev_lvol_delete_lvstore", 00:05:23.984 "bdev_lvol_rename_lvstore", 00:05:23.984 "bdev_lvol_create_lvstore", 00:05:23.984 "bdev_raid_set_options", 00:05:23.984 "bdev_raid_remove_base_bdev", 00:05:23.984 "bdev_raid_add_base_bdev", 00:05:23.984 "bdev_raid_delete", 00:05:23.984 "bdev_raid_create", 00:05:23.984 "bdev_raid_get_bdevs", 00:05:23.984 "bdev_error_inject_error", 00:05:23.984 
"bdev_error_delete", 00:05:23.984 "bdev_error_create", 00:05:23.984 "bdev_split_delete", 00:05:23.984 "bdev_split_create", 00:05:23.984 "bdev_delay_delete", 00:05:23.984 "bdev_delay_create", 00:05:23.984 "bdev_delay_update_latency", 00:05:23.984 "bdev_zone_block_delete", 00:05:23.984 "bdev_zone_block_create", 00:05:23.984 "blobfs_create", 00:05:23.984 "blobfs_detect", 00:05:23.984 "blobfs_set_cache_size", 00:05:23.984 "bdev_xnvme_delete", 00:05:23.984 "bdev_xnvme_create", 00:05:23.984 "bdev_aio_delete", 00:05:23.984 "bdev_aio_rescan", 00:05:23.984 "bdev_aio_create", 00:05:23.984 "bdev_ftl_set_property", 00:05:23.984 "bdev_ftl_get_properties", 00:05:23.984 "bdev_ftl_get_stats", 00:05:23.984 "bdev_ftl_unmap", 00:05:23.984 "bdev_ftl_unload", 00:05:23.984 "bdev_ftl_delete", 00:05:23.984 "bdev_ftl_load", 00:05:23.984 "bdev_ftl_create", 00:05:23.984 "bdev_virtio_attach_controller", 00:05:23.984 "bdev_virtio_scsi_get_devices", 00:05:23.984 "bdev_virtio_detach_controller", 00:05:23.984 "bdev_virtio_blk_set_hotplug", 00:05:23.984 "bdev_iscsi_delete", 00:05:23.984 "bdev_iscsi_create", 00:05:23.984 "bdev_iscsi_set_options", 00:05:23.984 "accel_error_inject_error", 00:05:23.984 "ioat_scan_accel_module", 00:05:23.984 "dsa_scan_accel_module", 00:05:23.984 "iaa_scan_accel_module", 00:05:23.984 "keyring_file_remove_key", 00:05:23.984 "keyring_file_add_key", 00:05:23.984 "keyring_linux_set_options", 00:05:23.984 "fsdev_aio_delete", 00:05:23.984 "fsdev_aio_create", 00:05:23.984 "iscsi_get_histogram", 00:05:23.984 "iscsi_enable_histogram", 00:05:23.984 "iscsi_set_options", 00:05:23.984 "iscsi_get_auth_groups", 00:05:23.985 "iscsi_auth_group_remove_secret", 00:05:23.985 "iscsi_auth_group_add_secret", 00:05:23.985 "iscsi_delete_auth_group", 00:05:23.985 "iscsi_create_auth_group", 00:05:23.985 "iscsi_set_discovery_auth", 00:05:23.985 "iscsi_get_options", 00:05:23.985 "iscsi_target_node_request_logout", 00:05:23.985 "iscsi_target_node_set_redirect", 00:05:23.985 "iscsi_target_node_set_auth", 00:05:23.985 "iscsi_target_node_add_lun", 00:05:23.985 "iscsi_get_stats", 00:05:23.985 "iscsi_get_connections", 00:05:23.985 "iscsi_portal_group_set_auth", 00:05:23.985 "iscsi_start_portal_group", 00:05:23.985 "iscsi_delete_portal_group", 00:05:23.985 "iscsi_create_portal_group", 00:05:23.985 "iscsi_get_portal_groups", 00:05:23.985 "iscsi_delete_target_node", 00:05:23.985 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.985 "iscsi_target_node_add_pg_ig_maps", 00:05:23.985 "iscsi_create_target_node", 00:05:23.985 "iscsi_get_target_nodes", 00:05:23.985 "iscsi_delete_initiator_group", 00:05:23.985 "iscsi_initiator_group_remove_initiators", 00:05:23.985 "iscsi_initiator_group_add_initiators", 00:05:23.985 "iscsi_create_initiator_group", 00:05:23.985 "iscsi_get_initiator_groups", 00:05:23.985 "nvmf_set_crdt", 00:05:23.985 "nvmf_set_config", 00:05:23.985 "nvmf_set_max_subsystems", 00:05:23.985 "nvmf_stop_mdns_prr", 00:05:23.985 "nvmf_publish_mdns_prr", 00:05:23.985 "nvmf_subsystem_get_listeners", 00:05:23.985 "nvmf_subsystem_get_qpairs", 00:05:23.985 "nvmf_subsystem_get_controllers", 00:05:23.985 "nvmf_get_stats", 00:05:23.985 "nvmf_get_transports", 00:05:23.985 "nvmf_create_transport", 00:05:23.985 "nvmf_get_targets", 00:05:23.985 "nvmf_delete_target", 00:05:23.985 "nvmf_create_target", 00:05:23.985 "nvmf_subsystem_allow_any_host", 00:05:23.985 "nvmf_subsystem_set_keys", 00:05:23.985 "nvmf_subsystem_remove_host", 00:05:23.985 "nvmf_subsystem_add_host", 00:05:23.985 "nvmf_ns_remove_host", 00:05:23.985 "nvmf_ns_add_host", 
00:05:23.985 "nvmf_subsystem_remove_ns", 00:05:23.985 "nvmf_subsystem_set_ns_ana_group", 00:05:23.985 "nvmf_subsystem_add_ns", 00:05:23.985 "nvmf_subsystem_listener_set_ana_state", 00:05:23.985 "nvmf_discovery_get_referrals", 00:05:23.985 "nvmf_discovery_remove_referral", 00:05:23.985 "nvmf_discovery_add_referral", 00:05:23.985 "nvmf_subsystem_remove_listener", 00:05:23.985 "nvmf_subsystem_add_listener", 00:05:23.985 "nvmf_delete_subsystem", 00:05:23.985 "nvmf_create_subsystem", 00:05:23.985 "nvmf_get_subsystems", 00:05:23.985 "env_dpdk_get_mem_stats", 00:05:23.985 "nbd_get_disks", 00:05:23.985 "nbd_stop_disk", 00:05:23.985 "nbd_start_disk", 00:05:23.985 "ublk_recover_disk", 00:05:23.985 "ublk_get_disks", 00:05:23.985 "ublk_stop_disk", 00:05:23.985 "ublk_start_disk", 00:05:23.985 "ublk_destroy_target", 00:05:23.985 "ublk_create_target", 00:05:23.985 "virtio_blk_create_transport", 00:05:23.985 "virtio_blk_get_transports", 00:05:23.985 "vhost_controller_set_coalescing", 00:05:23.985 "vhost_get_controllers", 00:05:23.985 "vhost_delete_controller", 00:05:23.985 "vhost_create_blk_controller", 00:05:23.985 "vhost_scsi_controller_remove_target", 00:05:23.985 "vhost_scsi_controller_add_target", 00:05:23.985 "vhost_start_scsi_controller", 00:05:23.985 "vhost_create_scsi_controller", 00:05:23.985 "thread_set_cpumask", 00:05:23.985 "scheduler_set_options", 00:05:23.985 "framework_get_governor", 00:05:23.985 "framework_get_scheduler", 00:05:23.985 "framework_set_scheduler", 00:05:23.985 "framework_get_reactors", 00:05:23.985 "thread_get_io_channels", 00:05:23.985 "thread_get_pollers", 00:05:23.985 "thread_get_stats", 00:05:23.985 "framework_monitor_context_switch", 00:05:23.985 "spdk_kill_instance", 00:05:23.985 "log_enable_timestamps", 00:05:23.985 "log_get_flags", 00:05:23.985 "log_clear_flag", 00:05:23.985 "log_set_flag", 00:05:23.985 "log_get_level", 00:05:23.985 "log_set_level", 00:05:23.985 "log_get_print_level", 00:05:23.985 "log_set_print_level", 00:05:23.985 "framework_enable_cpumask_locks", 00:05:23.985 "framework_disable_cpumask_locks", 00:05:23.985 "framework_wait_init", 00:05:23.985 "framework_start_init", 00:05:23.985 "scsi_get_devices", 00:05:23.985 "bdev_get_histogram", 00:05:23.985 "bdev_enable_histogram", 00:05:23.985 "bdev_set_qos_limit", 00:05:23.985 "bdev_set_qd_sampling_period", 00:05:23.985 "bdev_get_bdevs", 00:05:23.985 "bdev_reset_iostat", 00:05:23.985 "bdev_get_iostat", 00:05:23.985 "bdev_examine", 00:05:23.985 "bdev_wait_for_examine", 00:05:23.985 "bdev_set_options", 00:05:23.985 "accel_get_stats", 00:05:23.985 "accel_set_options", 00:05:23.985 "accel_set_driver", 00:05:23.985 "accel_crypto_key_destroy", 00:05:23.985 "accel_crypto_keys_get", 00:05:23.985 "accel_crypto_key_create", 00:05:23.985 "accel_assign_opc", 00:05:23.985 "accel_get_module_info", 00:05:23.985 "accel_get_opc_assignments", 00:05:23.985 "vmd_rescan", 00:05:23.985 "vmd_remove_device", 00:05:23.985 "vmd_enable", 00:05:23.985 "sock_get_default_impl", 00:05:23.985 "sock_set_default_impl", 00:05:23.985 "sock_impl_set_options", 00:05:23.985 "sock_impl_get_options", 00:05:23.985 "iobuf_get_stats", 00:05:23.985 "iobuf_set_options", 00:05:23.985 "keyring_get_keys", 00:05:23.985 "framework_get_pci_devices", 00:05:23.985 "framework_get_config", 00:05:23.985 "framework_get_subsystems", 00:05:23.985 "fsdev_set_opts", 00:05:23.985 "fsdev_get_opts", 00:05:23.985 "trace_get_info", 00:05:23.985 "trace_get_tpoint_group_mask", 00:05:23.985 "trace_disable_tpoint_group", 00:05:23.985 "trace_enable_tpoint_group", 00:05:23.985 
"trace_clear_tpoint_mask", 00:05:23.985 "trace_set_tpoint_mask", 00:05:23.985 "notify_get_notifications", 00:05:23.985 "notify_get_types", 00:05:23.985 "spdk_get_version", 00:05:23.985 "rpc_get_methods" 00:05:23.985 ] 00:05:23.985 09:22:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.985 09:22:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.985 09:22:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71716 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71716 ']' 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71716 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71716 00:05:23.985 killing process with pid 71716 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71716' 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71716 00:05:23.985 09:22:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71716 00:05:24.246 ************************************ 00:05:24.246 END TEST spdkcli_tcp 00:05:24.246 ************************************ 00:05:24.246 00:05:24.246 real 0m1.694s 00:05:24.246 user 0m3.005s 00:05:24.246 sys 0m0.403s 00:05:24.246 09:22:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.246 09:22:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.508 09:22:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.508 09:22:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.508 09:22:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.508 09:22:51 -- common/autotest_common.sh@10 -- # set +x 00:05:24.508 ************************************ 00:05:24.508 START TEST dpdk_mem_utility 00:05:24.508 ************************************ 00:05:24.508 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.508 * Looking for test storage... 
00:05:24.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:24.508 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.508 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.508 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.508 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.508 09:22:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.509 09:22:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.509 09:22:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.509 --rc genhtml_branch_coverage=1 00:05:24.509 --rc genhtml_function_coverage=1 00:05:24.509 --rc genhtml_legend=1 00:05:24.509 --rc geninfo_all_blocks=1 00:05:24.509 --rc geninfo_unexecuted_blocks=1 00:05:24.509 00:05:24.509 ' 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.509 --rc 
genhtml_branch_coverage=1 00:05:24.509 --rc genhtml_function_coverage=1 00:05:24.509 --rc genhtml_legend=1 00:05:24.509 --rc geninfo_all_blocks=1 00:05:24.509 --rc geninfo_unexecuted_blocks=1 00:05:24.509 00:05:24.509 ' 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.509 --rc genhtml_branch_coverage=1 00:05:24.509 --rc genhtml_function_coverage=1 00:05:24.509 --rc genhtml_legend=1 00:05:24.509 --rc geninfo_all_blocks=1 00:05:24.509 --rc geninfo_unexecuted_blocks=1 00:05:24.509 00:05:24.509 ' 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.509 --rc genhtml_branch_coverage=1 00:05:24.509 --rc genhtml_function_coverage=1 00:05:24.509 --rc genhtml_legend=1 00:05:24.509 --rc geninfo_all_blocks=1 00:05:24.509 --rc geninfo_unexecuted_blocks=1 00:05:24.509 00:05:24.509 ' 00:05:24.509 09:22:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.509 09:22:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71811 00:05:24.509 09:22:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.509 09:22:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71811 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71811 ']' 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.509 09:22:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.769 [2024-11-29 09:22:52.247378] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:24.769 [2024-11-29 09:22:52.247681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71811 ] 00:05:24.769 [2024-11-29 09:22:52.380336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:24.769 [2024-11-29 09:22:52.412626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.769 [2024-11-29 09:22:52.440699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.715 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.715 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:25.715 09:22:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.715 09:22:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.715 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.715 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.715 { 00:05:25.715 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.715 } 00:05:25.715 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.715 09:22:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.715 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:25.715 1 heaps totaling size 818.000000 MiB 00:05:25.715 size: 818.000000 MiB heap id: 0 00:05:25.715 end heaps---------- 00:05:25.715 9 mempools totaling size 603.782043 MiB 00:05:25.715 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.715 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.715 size: 100.555481 MiB name: bdev_io_71811 00:05:25.715 size: 50.003479 MiB name: msgpool_71811 00:05:25.715 size: 36.509338 MiB name: fsdev_io_71811 00:05:25.715 size: 21.763794 MiB name: PDU_Pool 00:05:25.715 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.715 size: 4.133484 MiB name: evtpool_71811 00:05:25.715 size: 0.026123 MiB name: Session_Pool 00:05:25.715 end mempools------- 00:05:25.715 6 memzones totaling size 4.142822 MiB 00:05:25.715 size: 1.000366 MiB name: RG_ring_0_71811 00:05:25.715 size: 1.000366 MiB name: RG_ring_1_71811 00:05:25.715 size: 1.000366 MiB name: RG_ring_4_71811 00:05:25.715 size: 1.000366 MiB name: RG_ring_5_71811 00:05:25.715 size: 0.125366 MiB name: RG_ring_2_71811 00:05:25.715 size: 0.015991 MiB name: RG_ring_3_71811 00:05:25.715 end memzones------- 00:05:25.715 09:22:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.715 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:05:25.715 list of free elements. 
size: 10.943787 MiB 00:05:25.715 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:25.715 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:25.715 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:25.715 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:25.715 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:25.715 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:25.715 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:25.715 element at address: 0x200000200000 with size: 0.858093 MiB 00:05:25.715 element at address: 0x20001ae00000 with size: 0.568237 MiB 00:05:25.715 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:25.715 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:25.715 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:25.715 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:25.715 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:25.715 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:25.715 list of standard malloc elements. size: 199.127319 MiB 00:05:25.715 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:25.715 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:25.715 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:25.715 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:25.715 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:25.715 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:25.715 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.715 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:25.715 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:25.715 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 
00:05:25.716 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:25.716 element at 
address: 0x200000c7d480 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d280 
with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae92f80 with size: 0.000183 MiB 
00:05:25.716 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:25.716 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:25.717 element at 
address: 0x200028265500 with size: 0.000183 MiB 00:05:25.717 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:25.717 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:25.717 list of memzone associated elements. 
size: 607.928894 MiB 00:05:25.717 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:25.717 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.717 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:25.717 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.717 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:25.717 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71811_0 00:05:25.717 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:25.717 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71811_0 00:05:25.717 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:25.717 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71811_0 00:05:25.717 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:25.717 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.717 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:25.717 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.717 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:25.717 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71811_0 00:05:25.717 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:25.718 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71811 00:05:25.718 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:05:25.718 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71811 00:05:25.718 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:25.718 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.718 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:25.718 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.718 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:25.718 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.718 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:25.718 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.718 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:25.718 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71811 00:05:25.718 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:25.718 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71811 00:05:25.718 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:25.718 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71811 00:05:25.718 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:25.718 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71811 00:05:25.718 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:25.718 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71811 00:05:25.718 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:25.718 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71811 00:05:25.718 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:25.718 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.718 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:25.718 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.718 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:25.718 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.718 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:05:25.718 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71811 00:05:25.718 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:25.718 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71811 00:05:25.718 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:25.718 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.718 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:25.718 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.718 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:25.718 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71811 00:05:25.718 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:25.718 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.718 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:25.718 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71811 00:05:25.718 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:25.718 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71811 00:05:25.718 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:25.718 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71811 00:05:25.718 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:25.718 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.718 09:22:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.718 09:22:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71811 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71811 ']' 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71811 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71811 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.718 killing process with pid 71811 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71811' 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71811 00:05:25.718 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71811 00:05:25.980 00:05:25.980 real 0m1.465s 00:05:25.980 user 0m1.487s 00:05:25.980 sys 0m0.392s 00:05:25.980 ************************************ 00:05:25.980 END TEST dpdk_mem_utility 00:05:25.980 ************************************ 00:05:25.980 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.980 09:22:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.980 09:22:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.980 09:22:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.980 09:22:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.980 09:22:53 -- common/autotest_common.sh@10 -- # set +x 
00:05:25.980 ************************************ 00:05:25.980 START TEST event 00:05:25.980 ************************************ 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.980 * Looking for test storage... 00:05:25.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.980 09:22:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.980 09:22:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.980 09:22:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.980 09:22:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.980 09:22:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.980 09:22:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.980 09:22:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.980 09:22:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.980 09:22:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.980 09:22:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.980 09:22:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.980 09:22:53 event -- scripts/common.sh@344 -- # case "$op" in 00:05:25.980 09:22:53 event -- scripts/common.sh@345 -- # : 1 00:05:25.980 09:22:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.980 09:22:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.980 09:22:53 event -- scripts/common.sh@365 -- # decimal 1 00:05:25.980 09:22:53 event -- scripts/common.sh@353 -- # local d=1 00:05:25.980 09:22:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.980 09:22:53 event -- scripts/common.sh@355 -- # echo 1 00:05:25.980 09:22:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.980 09:22:53 event -- scripts/common.sh@366 -- # decimal 2 00:05:25.980 09:22:53 event -- scripts/common.sh@353 -- # local d=2 00:05:25.980 09:22:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.980 09:22:53 event -- scripts/common.sh@355 -- # echo 2 00:05:25.980 09:22:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.980 09:22:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.980 09:22:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.980 09:22:53 event -- scripts/common.sh@368 -- # return 0 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.980 --rc genhtml_branch_coverage=1 00:05:25.980 --rc genhtml_function_coverage=1 00:05:25.980 --rc genhtml_legend=1 00:05:25.980 --rc geninfo_all_blocks=1 00:05:25.980 --rc geninfo_unexecuted_blocks=1 00:05:25.980 00:05:25.980 ' 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.980 --rc genhtml_branch_coverage=1 00:05:25.980 --rc genhtml_function_coverage=1 00:05:25.980 --rc genhtml_legend=1 00:05:25.980 --rc 
geninfo_all_blocks=1 00:05:25.980 --rc geninfo_unexecuted_blocks=1 00:05:25.980 00:05:25.980 ' 00:05:25.980 09:22:53 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.980 --rc genhtml_branch_coverage=1 00:05:25.980 --rc genhtml_function_coverage=1 00:05:25.980 --rc genhtml_legend=1 00:05:25.980 --rc geninfo_all_blocks=1 00:05:25.981 --rc geninfo_unexecuted_blocks=1 00:05:25.981 00:05:25.981 ' 00:05:25.981 09:22:53 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.981 --rc genhtml_branch_coverage=1 00:05:25.981 --rc genhtml_function_coverage=1 00:05:25.981 --rc genhtml_legend=1 00:05:25.981 --rc geninfo_all_blocks=1 00:05:25.981 --rc geninfo_unexecuted_blocks=1 00:05:25.981 00:05:25.981 ' 00:05:25.981 09:22:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:25.981 09:22:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.981 09:22:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.981 09:22:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:25.981 09:22:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.981 09:22:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.981 ************************************ 00:05:25.981 START TEST event_perf 00:05:25.981 ************************************ 00:05:25.981 09:22:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.242 Running I/O for 1 seconds...[2024-11-29 09:22:53.719278] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:26.242 [2024-11-29 09:22:53.719398] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71891 ] 00:05:26.242 [2024-11-29 09:22:53.850773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:26.242 [2024-11-29 09:22:53.878781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.242 [2024-11-29 09:22:53.901437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.242 [2024-11-29 09:22:53.901728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.242 [2024-11-29 09:22:53.901902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.242 [2024-11-29 09:22:53.902078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.248 Running I/O for 1 seconds... 00:05:27.248 lcore 0: 194210 00:05:27.248 lcore 1: 194209 00:05:27.248 lcore 2: 194212 00:05:27.248 lcore 3: 194212 00:05:27.248 done. 
00:05:27.248 00:05:27.248 real 0m1.260s 00:05:27.248 user 0m4.058s 00:05:27.248 sys 0m0.084s 00:05:27.248 ************************************ 00:05:27.248 END TEST event_perf 00:05:27.248 ************************************ 00:05:27.248 09:22:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.248 09:22:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.508 09:22:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.508 09:22:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:27.508 09:22:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.508 09:22:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.508 ************************************ 00:05:27.508 START TEST event_reactor 00:05:27.508 ************************************ 00:05:27.508 09:22:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.508 [2024-11-29 09:22:55.046027] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:27.508 [2024-11-29 09:22:55.046147] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71926 ] 00:05:27.508 [2024-11-29 09:22:55.176339] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:27.508 [2024-11-29 09:22:55.205988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.508 [2024-11-29 09:22:55.225522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.887 test_start 00:05:28.887 oneshot 00:05:28.887 tick 100 00:05:28.887 tick 100 00:05:28.887 tick 250 00:05:28.887 tick 100 00:05:28.887 tick 100 00:05:28.887 tick 250 00:05:28.887 tick 100 00:05:28.887 tick 500 00:05:28.887 tick 100 00:05:28.887 tick 100 00:05:28.887 tick 250 00:05:28.887 tick 100 00:05:28.887 tick 100 00:05:28.887 test_end 00:05:28.887 00:05:28.887 real 0m1.255s 00:05:28.887 user 0m1.082s 00:05:28.887 sys 0m0.065s 00:05:28.887 09:22:56 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.887 ************************************ 00:05:28.887 END TEST event_reactor 00:05:28.887 ************************************ 00:05:28.887 09:22:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.887 09:22:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.887 09:22:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:28.887 09:22:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.887 09:22:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.887 ************************************ 00:05:28.887 START TEST event_reactor_perf 00:05:28.887 ************************************ 00:05:28.887 09:22:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.887 [2024-11-29 09:22:56.364802] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:05:28.887 [2024-11-29 09:22:56.364941] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71962 ] 00:05:28.887 [2024-11-29 09:22:56.497835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.887 [2024-11-29 09:22:56.528701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.887 [2024-11-29 09:22:56.548706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.269 test_start 00:05:30.269 test_end 00:05:30.269 Performance: 317313 events per second 00:05:30.269 00:05:30.269 real 0m1.264s 00:05:30.269 user 0m1.081s 00:05:30.269 sys 0m0.076s 00:05:30.269 09:22:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.269 09:22:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.269 ************************************ 00:05:30.269 END TEST event_reactor_perf 00:05:30.269 ************************************ 00:05:30.269 09:22:57 event -- event/event.sh@49 -- # uname -s 00:05:30.269 09:22:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.269 09:22:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.269 09:22:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.269 09:22:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.269 09:22:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.269 ************************************ 00:05:30.269 START TEST event_scheduler 00:05:30.269 ************************************ 00:05:30.269 09:22:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.269 * Looking for test storage... 
00:05:30.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:30.269 09:22:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.269 09:22:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.269 09:22:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.269 09:22:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:30.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.269 09:22:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.270 --rc genhtml_branch_coverage=1 00:05:30.270 --rc genhtml_function_coverage=1 00:05:30.270 --rc genhtml_legend=1 00:05:30.270 --rc geninfo_all_blocks=1 00:05:30.270 --rc geninfo_unexecuted_blocks=1 00:05:30.270 00:05:30.270 ' 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.270 --rc genhtml_branch_coverage=1 00:05:30.270 --rc genhtml_function_coverage=1 00:05:30.270 --rc genhtml_legend=1 00:05:30.270 --rc geninfo_all_blocks=1 00:05:30.270 --rc geninfo_unexecuted_blocks=1 00:05:30.270 00:05:30.270 ' 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.270 --rc genhtml_branch_coverage=1 00:05:30.270 --rc genhtml_function_coverage=1 00:05:30.270 --rc genhtml_legend=1 00:05:30.270 --rc geninfo_all_blocks=1 00:05:30.270 --rc geninfo_unexecuted_blocks=1 00:05:30.270 00:05:30.270 ' 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.270 --rc genhtml_branch_coverage=1 00:05:30.270 --rc genhtml_function_coverage=1 00:05:30.270 --rc genhtml_legend=1 00:05:30.270 --rc geninfo_all_blocks=1 00:05:30.270 --rc geninfo_unexecuted_blocks=1 00:05:30.270 00:05:30.270 ' 00:05:30.270 09:22:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.270 09:22:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72032 00:05:30.270 09:22:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.270 09:22:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72032 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72032 ']' 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.270 09:22:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.270 09:22:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.270 [2024-11-29 09:22:57.871491] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:05:30.270 [2024-11-29 09:22:57.871646] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72032 ] 00:05:30.530 [2024-11-29 09:22:58.009404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.530 [2024-11-29 09:22:58.041121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.530 [2024-11-29 09:22:58.084236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.530 [2024-11-29 09:22:58.084767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.530 [2024-11-29 09:22:58.084883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.530 [2024-11-29 09:22:58.084950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.099 09:22:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.099 09:22:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:31.099 09:22:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:31.099 09:22:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.099 09:22:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.099 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:31.099 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:31.099 POWER: intel_pstate driver is not supported 00:05:31.099 POWER: cppc_cpufreq driver is not supported 00:05:31.099 POWER: amd-pstate driver is not supported 00:05:31.099 POWER: acpi-cpufreq driver is not supported 00:05:31.099 POWER: Unable to set Power Management Environment for lcore 0 00:05:31.099 [2024-11-29 09:22:58.806999] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:31.099 [2024-11-29 09:22:58.807035] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:31.099 [2024-11-29 09:22:58.807062] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:31.100 [2024-11-29 09:22:58.807092] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:31.100 [2024-11-29 09:22:58.807105] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:31.100 [2024-11-29 09:22:58.807126] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:31.100 09:22:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.100 09:22:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:31.100 09:22:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.100 09:22:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.358 [2024-11-29 09:22:58.889031] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
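The POWER: lines above show every cpufreq driver probe failing inside the VM, so the dynamic scheduler comes up without its dpdk governor and keeps its default limits (load 20, core 80, busy 95). The RPC sequence the test drives can, in principle, be reproduced by hand against any SPDK app launched with --wait-for-rpc; the socket path below is an assumption for illustration, not taken from this log:

  # Sketch: driving the same framework RPCs manually with the rpc.py
  # client that ships with SPDK.
  RPC=scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  $RPC -s $SOCK framework_set_scheduler dynamic   # governor may be absent; scheduler still loads
  $RPC -s $SOCK framework_start_init              # complete the deferred subsystem init
  $RPC -s $SOCK framework_get_scheduler           # confirm 'dynamic' is active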
00:05:31.358 09:22:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.358 09:22:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:31.358 09:22:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.358 09:22:58 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.358 09:22:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.358 ************************************ 00:05:31.358 START TEST scheduler_create_thread 00:05:31.358 ************************************ 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.358 2 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.358 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 3 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 4 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 5 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 6 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 7 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 8 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 9 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 10 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.359 09:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.359 09:22:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.359 09:22:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.359 09:22:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.359 09:22:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.734 09:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.734 09:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.734 09:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.734 09:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.734 09:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.106 09:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.106 00:05:34.106 real 0m2.612s 00:05:34.106 user 0m0.019s 00:05:34.106 sys 0m0.003s 00:05:34.106 09:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.106 09:23:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.106 ************************************ 00:05:34.106 END TEST scheduler_create_thread 00:05:34.106 ************************************ 00:05:34.106 09:23:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.106 09:23:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72032 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72032 ']' 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72032 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72032 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:34.106 killing process with pid 72032 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72032' 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72032 00:05:34.106 09:23:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72032 00:05:34.364 [2024-11-29 09:23:01.996624] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
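scheduler_create_thread above walks a full thread lifecycle through the test app's plugin RPCs: pinned threads at 100% activity on each core (masks 0x1 through 0x8), idle pinned threads at 0%, an unpinned one_third_active thread at 30%, a half_active thread created idle and then raised to 50%, and finally a thread created only to be deleted. Condensed, the traced calls amount to the sequence below (rpc_cmd and the scheduler_plugin RPCs exist only inside this test harness, not in a stock SPDK build):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy, pinned to core 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0    # idle, pinned to core 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30       # unpinned, 30% busy
  tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)      # created idle...
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50                   # ...then raised to 50% busy
  tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"                          # create, then tear down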
00:05:34.624 00:05:34.624 real 0m4.535s 00:05:34.624 user 0m8.449s 00:05:34.624 sys 0m0.383s 00:05:34.624 09:23:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.624 ************************************ 00:05:34.624 END TEST event_scheduler 00:05:34.624 ************************************ 00:05:34.624 09:23:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.624 09:23:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.624 09:23:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.624 09:23:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.624 09:23:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.624 09:23:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.624 ************************************ 00:05:34.624 START TEST app_repeat 00:05:34.624 ************************************ 00:05:34.624 09:23:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72133 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.624 Process app_repeat pid: 72133 00:05:34.624 spdk_app_start Round 0 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72133' 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72133 /var/tmp/spdk-nbd.sock 00:05:34.624 09:23:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72133 ']' 00:05:34.624 09:23:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.624 09:23:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.624 09:23:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.624 09:23:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.624 09:23:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.624 09:23:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.624 [2024-11-29 09:23:02.283398] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:05:34.624 [2024-11-29 09:23:02.283507] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72133 ] 00:05:34.884 [2024-11-29 09:23:02.415019] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:34.884 [2024-11-29 09:23:02.445947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.884 [2024-11-29 09:23:02.467636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.884 [2024-11-29 09:23:02.467713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.451 09:23:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.451 09:23:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.451 09:23:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.710 Malloc0 00:05:35.710 09:23:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.968 Malloc1 00:05:35.968 09:23:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.968 09:23:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.227 /dev/nbd0 00:05:36.227 09:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.227 09:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.227 09:23:03 event.app_repeat -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.227 1+0 records in 00:05:36.227 1+0 records out 00:05:36.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632429 s, 6.5 MB/s 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.227 09:23:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.227 09:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.227 09:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.227 09:23:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.487 /dev/nbd1 00:05:36.487 09:23:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.487 09:23:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.487 1+0 records in 00:05:36.487 1+0 records out 00:05:36.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303491 s, 13.5 MB/s 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.487 09:23:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.487 09:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.487 09:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.487 09:23:04 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.487 09:23:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.487 09:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.748 { 00:05:36.748 "nbd_device": "/dev/nbd0", 00:05:36.748 "bdev_name": "Malloc0" 00:05:36.748 }, 00:05:36.748 { 00:05:36.748 "nbd_device": "/dev/nbd1", 00:05:36.748 "bdev_name": "Malloc1" 00:05:36.748 } 00:05:36.748 ]' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.748 { 00:05:36.748 "nbd_device": "/dev/nbd0", 00:05:36.748 "bdev_name": "Malloc0" 00:05:36.748 }, 00:05:36.748 { 00:05:36.748 "nbd_device": "/dev/nbd1", 00:05:36.748 "bdev_name": "Malloc1" 00:05:36.748 } 00:05:36.748 ]' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.748 /dev/nbd1' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.748 /dev/nbd1' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.748 256+0 records in 00:05:36.748 256+0 records out 00:05:36.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741567 s, 141 MB/s 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.748 256+0 records in 00:05:36.748 256+0 records out 00:05:36.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248487 s, 42.2 MB/s 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.748 256+0 records in 00:05:36.748 256+0 records out 00:05:36.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183386 s, 57.2 MB/s 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.748 
09:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.748 09:23:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.008 09:23:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.271 09:23:04 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.271 09:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.534 09:23:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.534 09:23:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.793 09:23:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.793 [2024-11-29 09:23:05.379204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.793 [2024-11-29 09:23:05.394981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.793 [2024-11-29 09:23:05.394997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.793 [2024-11-29 09:23:05.424117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.793 [2024-11-29 09:23:05.424167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.079 09:23:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.079 spdk_app_start Round 1 00:05:41.079 09:23:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:41.079 09:23:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72133 /var/tmp/spdk-nbd.sock 00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72133 ']' 00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.079 09:23:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:41.079 09:23:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.079 Malloc0 00:05:41.079 09:23:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.338 Malloc1 00:05:41.338 09:23:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.338 09:23:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.595 /dev/nbd0 00:05:41.595 09:23:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.595 09:23:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.595 1+0 records in 00:05:41.595 1+0 records out 
00:05:41.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259153 s, 15.8 MB/s 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.595 09:23:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.595 09:23:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.595 09:23:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.596 09:23:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.854 /dev/nbd1 00:05:41.854 09:23:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.854 09:23:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.854 1+0 records in 00:05:41.854 1+0 records out 00:05:41.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239778 s, 17.1 MB/s 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.854 09:23:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.854 09:23:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.854 09:23:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.854 09:23:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.854 09:23:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.854 09:23:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.112 { 00:05:42.112 "nbd_device": "/dev/nbd0", 00:05:42.112 "bdev_name": "Malloc0" 00:05:42.112 }, 00:05:42.112 { 00:05:42.112 "nbd_device": "/dev/nbd1", 00:05:42.112 "bdev_name": "Malloc1" 00:05:42.112 } 
00:05:42.112 ]' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.112 { 00:05:42.112 "nbd_device": "/dev/nbd0", 00:05:42.112 "bdev_name": "Malloc0" 00:05:42.112 }, 00:05:42.112 { 00:05:42.112 "nbd_device": "/dev/nbd1", 00:05:42.112 "bdev_name": "Malloc1" 00:05:42.112 } 00:05:42.112 ]' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.112 /dev/nbd1' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.112 /dev/nbd1' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.112 256+0 records in 00:05:42.112 256+0 records out 00:05:42.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618349 s, 170 MB/s 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.112 256+0 records in 00:05:42.112 256+0 records out 00:05:42.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151396 s, 69.3 MB/s 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.112 256+0 records in 00:05:42.112 256+0 records out 00:05:42.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181263 s, 57.8 MB/s 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.112 09:23:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.112 09:23:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.113 09:23:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.113 09:23:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.370 09:23:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.629 09:23:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.629 09:23:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.887 09:23:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.145 [2024-11-29 09:23:10.628230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.145 [2024-11-29 09:23:10.643305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.145 [2024-11-29 09:23:10.643380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.145 [2024-11-29 09:23:10.672557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.145 [2024-11-29 09:23:10.672615] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.423 spdk_app_start Round 2 00:05:46.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.423 09:23:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.423 09:23:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.423 09:23:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72133 /var/tmp/spdk-nbd.sock 00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72133 ']' 00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
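The waitfornbd trace above (attach a malloc bdev to /dev/nbd1, poll /proc/partitions, then a single 4 KiB O_DIRECT read) is a two-stage readiness check: the device node existing is not enough, the NBD server behind it must also answer I/O. A minimal sketch of that pattern, assuming the same device naming; the function name, scratch path, and retry cadence here are illustrative, not SPDK's own helpers:

    # Sketch: wait until an NBD device is registered, then prove it is readable.
    waitfornbd_sketch() {
        local nbd_name=$1 tmp_file=$2 i
        # Stage 1: poll /proc/partitions until the kernel has registered the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # Stage 2: retry a single-block O_DIRECT read; it fails until the
        # NBD server behind the device is actually serving I/O.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct 2>/dev/null; then
                # The trace accepts any non-empty read ('[' 4096 '!=' 0 ']');
                # checking the exact block size is a slightly stricter choice.
                [[ $(stat -c %s "$tmp_file") -eq 4096 ]] && { rm -f "$tmp_file"; return 0; }
            fi
            sleep 0.1
        done
        rm -f "$tmp_file"
        return 1
    }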
00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.423 09:23:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.423 09:23:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.423 Malloc0 00:05:46.423 09:23:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.681 Malloc1 00:05:46.681 09:23:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.681 09:23:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.681 /dev/nbd0 00:05:46.940 09:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.940 09:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.940 1+0 records in 00:05:46.940 1+0 records out 
00:05:46.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176036 s, 23.3 MB/s 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.940 09:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.940 09:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.940 09:23:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.940 /dev/nbd1 00:05:46.940 09:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.940 09:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.940 1+0 records in 00:05:46.940 1+0 records out 00:05:46.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217516 s, 18.8 MB/s 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.940 09:23:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.198 09:23:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.198 09:23:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.198 { 00:05:47.198 "nbd_device": "/dev/nbd0", 00:05:47.198 "bdev_name": "Malloc0" 00:05:47.198 }, 00:05:47.198 { 00:05:47.198 "nbd_device": "/dev/nbd1", 00:05:47.198 "bdev_name": "Malloc1" 00:05:47.198 } 
00:05:47.198 ]' 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.198 { 00:05:47.198 "nbd_device": "/dev/nbd0", 00:05:47.198 "bdev_name": "Malloc0" 00:05:47.198 }, 00:05:47.198 { 00:05:47.198 "nbd_device": "/dev/nbd1", 00:05:47.198 "bdev_name": "Malloc1" 00:05:47.198 } 00:05:47.198 ]' 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.198 /dev/nbd1' 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.198 /dev/nbd1' 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.198 09:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.199 09:23:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.199 09:23:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.199 09:23:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.199 09:23:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.199 256+0 records in 00:05:47.199 256+0 records out 00:05:47.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00970164 s, 108 MB/s 00:05:47.199 09:23:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.199 09:23:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.459 256+0 records in 00:05:47.459 256+0 records out 00:05:47.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180041 s, 58.2 MB/s 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.459 256+0 records in 00:05:47.459 256+0 records out 00:05:47.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192746 s, 54.4 MB/s 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.459 09:23:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.459 09:23:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.459 09:23:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.716 09:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.975 09:23:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.975 09:23:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.331 09:23:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.331 [2024-11-29 09:23:15.967244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.616 [2024-11-29 09:23:15.988677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.616 [2024-11-29 09:23:15.988874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.616 [2024-11-29 09:23:16.031253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.616 [2024-11-29 09:23:16.031310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.145 09:23:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72133 /var/tmp/spdk-nbd.sock 00:05:51.145 09:23:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72133 ']' 00:05:51.145 09:23:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.145 09:23:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.145 09:23:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
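Both rounds above run the same write-then-verify cycle (nbd_dd_data_verify in the trace): fill a 1 MiB scratch file from /dev/urandom, copy it onto every exported NBD device with O_DIRECT, then compare the first 1 MiB of each device byte-for-byte against the file. A minimal sketch of that cycle; the device list and block geometry are taken from the trace, the scratch path is illustrative:

    # Sketch: write identical random data to each NBD device, then verify it.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest    # illustrative scratch path

    # Write phase: 256 x 4 KiB blocks of random data, pushed with O_DIRECT
    # so the data actually reaches the bdev rather than the page cache.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: byte-wise compare of the first 1 MiB of each device.
    # cmp exits non-zero on the first mismatch, failing the test under set -e.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"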
00:05:51.145 09:23:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.145 09:23:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.402 09:23:19 event.app_repeat -- event/event.sh@39 -- # killprocess 72133 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72133 ']' 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72133 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72133 00:05:51.402 killing process with pid 72133 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72133' 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72133 00:05:51.402 09:23:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72133 00:05:51.659 spdk_app_start is called in Round 0. 00:05:51.659 Shutdown signal received, stop current app iteration 00:05:51.659 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:05:51.659 spdk_app_start is called in Round 1. 00:05:51.659 Shutdown signal received, stop current app iteration 00:05:51.659 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:05:51.659 spdk_app_start is called in Round 2. 00:05:51.659 Shutdown signal received, stop current app iteration 00:05:51.659 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:05:51.659 spdk_app_start is called in Round 3. 00:05:51.659 Shutdown signal received, stop current app iteration 00:05:51.659 ************************************ 00:05:51.659 END TEST app_repeat 00:05:51.659 ************************************ 00:05:51.659 09:23:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.659 09:23:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.659 00:05:51.659 real 0m17.003s 00:05:51.659 user 0m37.974s 00:05:51.659 sys 0m2.121s 00:05:51.659 09:23:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.659 09:23:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.659 09:23:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.659 09:23:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.659 09:23:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.659 09:23:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.659 09:23:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.659 ************************************ 00:05:51.659 START TEST cpu_locks 00:05:51.659 ************************************ 00:05:51.659 09:23:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.659 * Looking for test storage... 
00:05:51.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.659 09:23:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.659 09:23:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.659 09:23:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.916 09:23:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.916 09:23:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:51.916 09:23:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.916 09:23:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.916 --rc genhtml_branch_coverage=1 00:05:51.916 --rc genhtml_function_coverage=1 00:05:51.916 --rc genhtml_legend=1 00:05:51.916 --rc geninfo_all_blocks=1 00:05:51.916 --rc geninfo_unexecuted_blocks=1 00:05:51.916 00:05:51.916 ' 00:05:51.916 09:23:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.916 --rc genhtml_branch_coverage=1 00:05:51.917 --rc genhtml_function_coverage=1 
00:05:51.917 --rc genhtml_legend=1 00:05:51.917 --rc geninfo_all_blocks=1 00:05:51.917 --rc geninfo_unexecuted_blocks=1 00:05:51.917 00:05:51.917 ' 00:05:51.917 09:23:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.917 --rc genhtml_branch_coverage=1 00:05:51.917 --rc genhtml_function_coverage=1 00:05:51.917 --rc genhtml_legend=1 00:05:51.917 --rc geninfo_all_blocks=1 00:05:51.917 --rc geninfo_unexecuted_blocks=1 00:05:51.917 00:05:51.917 ' 00:05:51.917 09:23:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.917 --rc genhtml_branch_coverage=1 00:05:51.917 --rc genhtml_function_coverage=1 00:05:51.917 --rc genhtml_legend=1 00:05:51.917 --rc geninfo_all_blocks=1 00:05:51.917 --rc geninfo_unexecuted_blocks=1 00:05:51.917 00:05:51.917 ' 00:05:51.917 09:23:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.917 09:23:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.917 09:23:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.917 09:23:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.917 09:23:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.917 09:23:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.917 09:23:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.917 ************************************ 00:05:51.917 START TEST default_locks 00:05:51.917 ************************************ 00:05:51.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72552 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72552 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72552 ']' 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.917 09:23:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.917 [2024-11-29 09:23:19.507088] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:05:51.917 [2024-11-29 09:23:19.507204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72552 ] 00:05:51.917 [2024-11-29 09:23:19.639999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.173 [2024-11-29 09:23:19.672020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.173 [2024-11-29 09:23:19.690735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.737 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.737 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:52.737 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72552 00:05:52.737 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72552 00:05:52.737 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72552 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72552 ']' 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72552 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72552 00:05:52.994 killing process with pid 72552 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72552' 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72552 00:05:52.994 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72552 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72552 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72552 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.252 ERROR: process (pid: 72552) is no longer running 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72552 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72552 ']' 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72552) - No such process 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.252 00:05:53.252 real 0m1.324s 00:05:53.252 user 0m1.367s 00:05:53.252 sys 0m0.376s 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.252 09:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.252 ************************************ 00:05:53.252 END TEST default_locks 00:05:53.252 ************************************ 00:05:53.252 09:23:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.252 09:23:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.252 09:23:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.252 09:23:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.252 ************************************ 00:05:53.252 START TEST default_locks_via_rpc 00:05:53.252 ************************************ 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
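The default_locks flow above reduces to: start spdk_tgt pinned to core 0 (-m 0x1), confirm the process holds a file lock whose name contains spdk_cpu_lock, kill it, and confirm that a second waitforlisten on the dead PID fails as expected. A minimal sketch of the lock check itself, assuming lslocks from util-linux as in the trace; the function name is illustrative:

    # Sketch: does PID hold an SPDK per-core CPU lock?
    # spdk_tgt takes a lock file (name containing "spdk_cpu_lock", per the
    # grep in the trace) for each core in its mask; lslocks lists the locks
    # held by a given PID.
    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Usage against the trace above (PID of the spdk_tgt just started):
    # locks_exist_sketch 72552 && echo "core lock held"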
00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72600 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72600 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72600 ']' 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.252 09:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.252 [2024-11-29 09:23:20.869565] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:53.252 [2024-11-29 09:23:20.869699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72600 ] 00:05:53.509 [2024-11-29 09:23:21.002337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.509 [2024-11-29 09:23:21.032914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.509 [2024-11-29 09:23:21.050966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.075 09:23:21 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72600 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.075 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72600 00:05:54.332 09:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72600 00:05:54.332 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72600 ']' 00:05:54.332 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72600 00:05:54.332 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:54.333 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.333 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72600 00:05:54.333 killing process with pid 72600 00:05:54.333 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.333 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.333 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72600' 00:05:54.333 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72600 00:05:54.333 09:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72600 00:05:54.590 00:05:54.590 real 0m1.358s 00:05:54.590 user 0m1.411s 00:05:54.590 sys 0m0.387s 00:05:54.590 09:23:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.590 09:23:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.590 ************************************ 00:05:54.590 END TEST default_locks_via_rpc 00:05:54.590 ************************************ 00:05:54.590 09:23:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.590 09:23:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.590 09:23:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.590 09:23:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.590 ************************************ 00:05:54.590 START TEST non_locking_app_on_locked_coremask 00:05:54.590 ************************************ 00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:54.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
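default_locks_via_rpc exercises the same lslocks check, but toggles the locks on a live target over the RPC socket instead of at startup. framework_disable_cpumask_locks and framework_enable_cpumask_locks are the RPCs the trace calls; a minimal sketch of the round trip, with the PID hard-coded only for illustration:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=72600   # spdk_tgt PID from the trace; illustrative here

    # Drop the per-core locks at runtime and verify they are gone...
    "$rpc" framework_disable_cpumask_locks
    ! lslocks -p "$pid" | grep -q spdk_cpu_lock

    # ...then re-take them and verify they are back.
    "$rpc" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock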
00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72641 00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72641 /var/tmp/spdk.sock 00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72641 ']' 00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.590 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.591 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.591 09:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.591 [2024-11-29 09:23:22.272710] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:54.591 [2024-11-29 09:23:22.272977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72641 ] 00:05:54.848 [2024-11-29 09:23:22.404496] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:54.848 [2024-11-29 09:23:22.434855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.848 [2024-11-29 09:23:22.453543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72657 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72657 /var/tmp/spdk2.sock 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72657 ']' 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
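non_locking_app_on_locked_coremask then shows what the locks are for: a second spdk_tgt can share core 0 with a lock-holding first instance only when started with --disable-cpumask-locks, and it needs its own RPC socket so the two targets do not collide. A minimal sketch of the two launches, using the binary and socket paths from the trace; the backgrounding and PID handling are illustrative:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance: core 0 only; takes the spdk_cpu_lock file for that core.
    "$spdk_tgt" -m 0x1 &
    pid1=$!

    # Second instance: same core mask, but told not to take core locks,
    # and given a private RPC socket.
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

In the trace each launch is followed by a waitforlisten on its socket before the lock check runs; without --disable-cpumask-locks the second launch would abort on the already-held core lock.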
00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.415 09:23:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.674 [2024-11-29 09:23:23.163748] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:55.674 [2024-11-29 09:23:23.163862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72657 ] 00:05:55.674 [2024-11-29 09:23:23.298318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:55.674 [2024-11-29 09:23:23.340805] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.674 [2024-11-29 09:23:23.340842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.674 [2024-11-29 09:23:23.379485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72641 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72641 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72641 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72641 ']' 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72641 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72641 00:05:56.611 killing process with pid 72641 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72641' 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72641 00:05:56.611 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72641 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72657 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72657 ']' 00:05:57.180 09:23:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72657 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72657 00:05:57.180 killing process with pid 72657 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72657' 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72657 00:05:57.180 09:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72657 00:05:57.442 00:05:57.442 real 0m2.833s 00:05:57.442 user 0m3.141s 00:05:57.442 sys 0m0.755s 00:05:57.442 09:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.442 ************************************ 00:05:57.442 END TEST non_locking_app_on_locked_coremask 00:05:57.442 09:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.442 ************************************ 00:05:57.442 09:23:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:57.442 09:23:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.442 09:23:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.442 09:23:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.442 ************************************ 00:05:57.442 START TEST locking_app_on_unlocked_coremask 00:05:57.442 ************************************ 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72715 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72715 /var/tmp/spdk.sock 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72715 ']' 00:05:57.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
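Every test above tears down through the same killprocess shape: confirm the PID still belongs to an SPDK reactor before signalling it, then wait for it to exit so the lock files are released deterministically before the next test starts. A minimal sketch, assuming the reactor_0 process name seen in the trace; the sudo and retry handling of the real helper is omitted:

    killprocess_sketch() {
        local pid=$1
        # Only Linux ps semantics are assumed here, as in the trace.
        [[ $(uname) == Linux ]] || return 1
        # Refuse to signal a recycled PID: the command name must still be an
        # SPDK reactor (reactor_0 for a single-core target).
        [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        # Reap the child; a non-zero exit status after SIGTERM is expected.
        wait "$pid" || true
    }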
00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:57.442 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.442 [2024-11-29 09:23:25.161196] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:57.442 [2024-11-29 09:23:25.161317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72715 ] 00:05:57.701 [2024-11-29 09:23:25.293677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:57.701 [2024-11-29 09:23:25.319884] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.701 [2024-11-29 09:23:25.319931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.701 [2024-11-29 09:23:25.337990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.293 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.293 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.293 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72731 00:05:58.293 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72731 /var/tmp/spdk2.sock 00:05:58.293 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72731 ']' 00:05:58.294 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.294 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.294 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.294 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.294 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.294 09:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.573 [2024-11-29 09:23:26.063048] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:05:58.573 [2024-11-29 09:23:26.063160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72731 ] 00:05:58.573 [2024-11-29 09:23:26.195305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:58.573 [2024-11-29 09:23:26.228300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.573 [2024-11-29 09:23:26.261732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.508 09:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.508 09:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.508 09:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72731 00:05:59.508 09:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72731 00:05:59.508 09:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72715 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72715 ']' 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72715 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72715 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.508 killing process with pid 72715 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72715' 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72715 00:05:59.508 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72715 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72731 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72731 ']' 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72731 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72731 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72731' 00:06:00.076 killing process with pid 72731 00:06:00.076 
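Because the first target (pid 72715) was started with --disable-cpumask-locks, the second target (pid 72731) could come up on the same core and is the one expected to hold the core lock; locks_exist verifies exactly that by listing the pid's file locks and grepping for the spdk_cpu_lock prefix. A rough equivalent, with the pid as a placeholder:

  pid=72731                                      # placeholder for the second spdk_tgt pid
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds a /var/tmp/spdk_cpu_lock_* flock"
  fi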
09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72731 00:06:00.076 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72731 00:06:00.334 00:06:00.334 real 0m2.838s 00:06:00.334 user 0m3.144s 00:06:00.334 sys 0m0.772s 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.334 ************************************ 00:06:00.334 END TEST locking_app_on_unlocked_coremask 00:06:00.334 ************************************ 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.334 09:23:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:00.334 09:23:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.334 09:23:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.334 09:23:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.334 ************************************ 00:06:00.334 START TEST locking_app_on_locked_coremask 00:06:00.334 ************************************ 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72789 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72789 /var/tmp/spdk.sock 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72789 ']' 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.334 09:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.334 [2024-11-29 09:23:28.055606] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:00.334 [2024-11-29 09:23:28.055698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72789 ] 00:06:00.593 [2024-11-29 09:23:28.181736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.593 [2024-11-29 09:23:28.205601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.593 [2024-11-29 09:23:28.224849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72805 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72805 /var/tmp/spdk2.sock 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72805 /var/tmp/spdk2.sock 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72805 /var/tmp/spdk2.sock 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72805 ']' 00:06:01.158 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.416 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.416 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.416 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.416 09:23:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.416 [2024-11-29 09:23:28.956425] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:01.416 [2024-11-29 09:23:28.956547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72805 ] 00:06:01.416 [2024-11-29 09:23:29.087054] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.416 [2024-11-29 09:23:29.119969] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72789 has claimed it. 
00:06:01.416 [2024-11-29 09:23:29.120023] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.981 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72805) - No such process 00:06:01.981 ERROR: process (pid: 72805) is no longer running 00:06:01.981 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72789 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72789 00:06:01.982 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72789 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72789 ']' 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72789 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72789 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.240 killing process with pid 72789 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72789' 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72789 00:06:02.240 09:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72789 00:06:02.499 00:06:02.499 real 0m2.188s 00:06:02.499 user 0m2.433s 00:06:02.499 sys 0m0.534s 00:06:02.499 09:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.499 09:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.499 ************************************ 00:06:02.499 END TEST locking_app_on_locked_coremask 00:06:02.499 ************************************ 00:06:02.499 09:23:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:02.499 09:23:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.499 09:23:30 event.cpu_locks 
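This is the negative case of locking_app_on_locked_coremask: the first target (72789) started with core locks enabled and claimed core 0, so the second target (72805) on the same -m 0x1 mask logs "Cannot create lock on core 0" and exits, and the NOT wrapper turns that expected failure into a test pass. A much simplified sketch of the idea, reusing paths from this run:

  # simplified stand-in for the autotest NOT helper: succeed only when the wrapped command fails
  NOT() { if "$@"; then return 1; else return 0; fi; }
  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $tgt -m 0x1 &                                # first instance claims /var/tmp/spdk_cpu_lock_000
  sleep 1                                      # crude stand-in for waitforlisten
  NOT $tgt -m 0x1 -r /var/tmp/spdk2.sock       # second instance must abort: core 0 is already locked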
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.499 09:23:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.499 ************************************ 00:06:02.499 START TEST locking_overlapped_coremask 00:06:02.499 ************************************ 00:06:02.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72847 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72847 /var/tmp/spdk.sock 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72847 ']' 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.499 09:23:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:02.756 [2024-11-29 09:23:30.281933] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:02.756 [2024-11-29 09:23:30.282058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72847 ] 00:06:02.756 [2024-11-29 09:23:30.415096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:02.756 [2024-11-29 09:23:30.441170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.756 [2024-11-29 09:23:30.463488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.756 [2024-11-29 09:23:30.463732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.757 [2024-11-29 09:23:30.463818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72865 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72865 /var/tmp/spdk2.sock 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72865 /var/tmp/spdk2.sock 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:03.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72865 /var/tmp/spdk2.sock 00:06:03.691 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72865 ']' 00:06:03.692 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.692 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.692 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.692 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.692 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.692 [2024-11-29 09:23:31.183027] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:03.692 [2024-11-29 09:23:31.183142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72865 ] 00:06:03.692 [2024-11-29 09:23:31.316275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
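The overlap is in the core masks themselves: -m 0x7 is binary 111 (cores 0-2) for the first target and -m 0x1c is binary 11100 (cores 2-4) for the second, so both claim core 2. A quick check of the bit math:

  printf 'first mask : 0x%x -> cores 0,1,2\n' 0x7
  printf 'second mask: 0x%x -> cores 2,3,4\n' 0x1c
  echo $(( 0x7 & 0x1c ))                       # 4, i.e. only bit 2 (core 2) is contested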
Enabled only for validation. 00:06:03.692 [2024-11-29 09:23:31.362011] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72847 has claimed it. 00:06:03.692 [2024-11-29 09:23:31.362063] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.261 ERROR: process (pid: 72865) is no longer running 00:06:04.261 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72865) - No such process 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72847 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72847 ']' 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72847 00:06:04.261 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.262 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.262 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72847 00:06:04.262 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.262 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.262 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72847' 00:06:04.262 killing process with pid 72847 00:06:04.262 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72847 00:06:04.262 09:23:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72847 00:06:04.521 00:06:04.521 real 0m1.867s 00:06:04.521 user 0m5.088s 00:06:04.521 sys 0m0.394s 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.521 
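check_remaining_locks, shown in the trace above, asserts that the surviving target (72847, cores 0-2) still owns exactly the expected lock files after the overlapping instance aborted. It is a straight filename comparison:

  locks=(/var/tmp/spdk_cpu_lock_*)                   # whatever lock files actually exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # one file per core in the 0x7 mask
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match cores 0-2"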
09:23:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.521 ************************************ 00:06:04.521 END TEST locking_overlapped_coremask 00:06:04.521 ************************************ 00:06:04.521 09:23:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:04.521 09:23:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.521 09:23:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.521 09:23:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.521 ************************************ 00:06:04.521 START TEST locking_overlapped_coremask_via_rpc 00:06:04.521 ************************************ 00:06:04.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72907 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72907 /var/tmp/spdk.sock 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72907 ']' 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.521 09:23:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.521 [2024-11-29 09:23:32.197878] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:04.521 [2024-11-29 09:23:32.197995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72907 ] 00:06:04.780 [2024-11-29 09:23:32.331650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.780 [2024-11-29 09:23:32.361133] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.780 [2024-11-29 09:23:32.361261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.780 [2024-11-29 09:23:32.383439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.780 [2024-11-29 09:23:32.383718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.780 [2024-11-29 09:23:32.383775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72925 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72925 /var/tmp/spdk2.sock 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72925 ']' 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.347 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.348 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.348 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.609 [2024-11-29 09:23:33.088305] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:05.609 [2024-11-29 09:23:33.088779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72925 ] 00:06:05.609 [2024-11-29 09:23:33.217424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.609 [2024-11-29 09:23:33.263212] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.609 [2024-11-29 09:23:33.263259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.609 [2024-11-29 09:23:33.330849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.869 [2024-11-29 09:23:33.334823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.869 [2024-11-29 09:23:33.334895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.436 [2024-11-29 09:23:33.968734] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72907 has claimed it. 00:06:06.436 request: 00:06:06.436 { 00:06:06.436 "method": "framework_enable_cpumask_locks", 00:06:06.436 "req_id": 1 00:06:06.436 } 00:06:06.436 Got JSON-RPC error response 00:06:06.436 response: 00:06:06.436 { 00:06:06.436 "code": -32603, 00:06:06.436 "message": "Failed to claim CPU core: 2" 00:06:06.436 } 00:06:06.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
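In the via_rpc variant both targets start with --disable-cpumask-locks, and the locks are requested afterwards over JSON-RPC: framework_enable_cpumask_locks on the default socket succeeds for the first target (cores 0-2), while the same call against /var/tmp/spdk2.sock fails with the -32603 "Failed to claim CPU core: 2" response shown above, because core 2 is already locked. A rough reproduction using the rpc.py path from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_enable_cpumask_locks                          # first target: acquires its core locks
  $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: expected JSON-RPC error -32603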
00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72907 /var/tmp/spdk.sock 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72907 ']' 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.436 09:23:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72925 /var/tmp/spdk2.sock 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72925 ']' 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.695 00:06:06.695 real 0m2.284s 00:06:06.695 user 0m1.079s 00:06:06.695 sys 0m0.128s 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.695 09:23:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 ************************************ 00:06:06.695 END TEST locking_overlapped_coremask_via_rpc 00:06:06.695 ************************************ 00:06:06.954 09:23:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:06.954 09:23:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72907 ]] 00:06:06.954 09:23:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72907 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72907 ']' 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72907 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72907 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72907' 00:06:06.954 killing process with pid 72907 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72907 00:06:06.954 09:23:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72907 00:06:07.212 09:23:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72925 ]] 00:06:07.212 09:23:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72925 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72925 ']' 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72925 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.212 
09:23:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72925 00:06:07.212 killing process with pid 72925 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72925' 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72925 00:06:07.212 09:23:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72925 00:06:07.479 09:23:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:07.479 Process with pid 72907 is not found 00:06:07.479 09:23:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:07.479 09:23:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72907 ]] 00:06:07.479 09:23:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72907 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72907 ']' 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72907 00:06:07.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72907) - No such process 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72907 is not found' 00:06:07.479 09:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72925 ]] 00:06:07.479 09:23:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72925 00:06:07.479 Process with pid 72925 is not found 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72925 ']' 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72925 00:06:07.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72925) - No such process 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72925 is not found' 00:06:07.479 09:23:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:07.479 ************************************ 00:06:07.479 END TEST cpu_locks 00:06:07.479 ************************************ 00:06:07.479 00:06:07.479 real 0m15.750s 00:06:07.479 user 0m27.945s 00:06:07.479 sys 0m4.174s 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.479 09:23:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.479 ************************************ 00:06:07.479 END TEST event 00:06:07.479 ************************************ 00:06:07.479 00:06:07.479 real 0m41.559s 00:06:07.479 user 1m20.769s 00:06:07.479 sys 0m7.137s 00:06:07.479 09:23:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.479 09:23:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.479 09:23:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:07.479 09:23:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.479 09:23:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.479 09:23:35 -- common/autotest_common.sh@10 -- # set +x 00:06:07.479 ************************************ 00:06:07.479 START TEST thread 00:06:07.479 ************************************ 00:06:07.479 09:23:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:07.479 * Looking for test storage... 
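The cpu_locks suite finishes with its cleanup function, and the EXIT trap runs it a second time: by then both pids (72907 and 72925) are already gone, so the "No such process" and "Process with pid ... is not found" lines above are the expected benign path, after which any stray lock files are removed. A tolerant sketch of that cleanup, with the pids as placeholders:

  cleanup() {
      for pid in 72907 72925; do               # placeholders for the recorded target pids
          kill -0 "$pid" 2>/dev/null && kill "$pid" || echo "Process with pid $pid is not found"
      done
      rm -f /var/tmp/spdk_cpu_lock*            # drop any leftover core-lock files
  }
  trap cleanup EXIT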
00:06:07.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:07.479 09:23:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.479 09:23:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.479 09:23:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.753 09:23:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.753 09:23:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.753 09:23:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.753 09:23:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.753 09:23:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.753 09:23:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.753 09:23:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.753 09:23:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.753 09:23:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.753 09:23:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.753 09:23:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.753 09:23:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:07.753 09:23:35 thread -- scripts/common.sh@345 -- # : 1 00:06:07.753 09:23:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.753 09:23:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.753 09:23:35 thread -- scripts/common.sh@365 -- # decimal 1 00:06:07.753 09:23:35 thread -- scripts/common.sh@353 -- # local d=1 00:06:07.753 09:23:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.753 09:23:35 thread -- scripts/common.sh@355 -- # echo 1 00:06:07.753 09:23:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.753 09:23:35 thread -- scripts/common.sh@366 -- # decimal 2 00:06:07.753 09:23:35 thread -- scripts/common.sh@353 -- # local d=2 00:06:07.753 09:23:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.753 09:23:35 thread -- scripts/common.sh@355 -- # echo 2 00:06:07.753 09:23:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.753 09:23:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.753 09:23:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.753 09:23:35 thread -- scripts/common.sh@368 -- # return 0 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.753 --rc genhtml_branch_coverage=1 00:06:07.753 --rc genhtml_function_coverage=1 00:06:07.753 --rc genhtml_legend=1 00:06:07.753 --rc geninfo_all_blocks=1 00:06:07.753 --rc geninfo_unexecuted_blocks=1 00:06:07.753 00:06:07.753 ' 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.753 --rc genhtml_branch_coverage=1 00:06:07.753 --rc genhtml_function_coverage=1 00:06:07.753 --rc genhtml_legend=1 00:06:07.753 --rc geninfo_all_blocks=1 00:06:07.753 --rc geninfo_unexecuted_blocks=1 00:06:07.753 00:06:07.753 ' 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:07.753 --rc genhtml_branch_coverage=1 00:06:07.753 --rc genhtml_function_coverage=1 00:06:07.753 --rc genhtml_legend=1 00:06:07.753 --rc geninfo_all_blocks=1 00:06:07.753 --rc geninfo_unexecuted_blocks=1 00:06:07.753 00:06:07.753 ' 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.753 --rc genhtml_branch_coverage=1 00:06:07.753 --rc genhtml_function_coverage=1 00:06:07.753 --rc genhtml_legend=1 00:06:07.753 --rc geninfo_all_blocks=1 00:06:07.753 --rc geninfo_unexecuted_blocks=1 00:06:07.753 00:06:07.753 ' 00:06:07.753 09:23:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.753 09:23:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.753 ************************************ 00:06:07.753 START TEST thread_poller_perf 00:06:07.753 ************************************ 00:06:07.753 09:23:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.753 [2024-11-29 09:23:35.291135] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:07.753 [2024-11-29 09:23:35.291357] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73052 ] 00:06:07.753 [2024-11-29 09:23:35.420401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.753 [2024-11-29 09:23:35.437496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.753 [2024-11-29 09:23:35.456377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.753 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:09.134 [2024-11-29T09:23:36.860Z] ====================================== 00:06:09.134 [2024-11-29T09:23:36.860Z] busy:2610016968 (cyc) 00:06:09.134 [2024-11-29T09:23:36.860Z] total_run_count: 407000 00:06:09.134 [2024-11-29T09:23:36.860Z] tsc_hz: 2600000000 (cyc) 00:06:09.135 [2024-11-29T09:23:36.861Z] ====================================== 00:06:09.135 [2024-11-29T09:23:36.861Z] poller_cost: 6412 (cyc), 2466 (nsec) 00:06:09.135 00:06:09.135 real 0m1.251s 00:06:09.135 ************************************ 00:06:09.135 END TEST thread_poller_perf 00:06:09.135 ************************************ 00:06:09.135 user 0m1.092s 00:06:09.135 sys 0m0.053s 00:06:09.135 09:23:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.135 09:23:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.135 09:23:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.135 09:23:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.135 09:23:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.135 09:23:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.135 ************************************ 00:06:09.135 START TEST thread_poller_perf 00:06:09.135 ************************************ 00:06:09.135 09:23:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.135 [2024-11-29 09:23:36.596007] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:09.135 [2024-11-29 09:23:36.596128] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73083 ] 00:06:09.135 [2024-11-29 09:23:36.722925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.135 [2024-11-29 09:23:36.748743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.135 Running 1000 pollers for 1 seconds with 0 microseconds period. 
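The summary above is internally consistent: poller_cost in cycles is the busy cycle count divided by the run count, and the nanosecond figure follows from the 2.6 GHz TSC. Checking this run's numbers:

  echo $(( 2610016968 / 407000 ))               # ~6412 cycles per iteration with a 1 us poller period
  echo $(( 6412 * 1000000000 / 2600000000 ))    # ~2466 ns at tsc_hz = 2600000000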
00:06:09.135 [2024-11-29 09:23:36.771429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.513 [2024-11-29T09:23:38.239Z] ====================================== 00:06:10.513 [2024-11-29T09:23:38.239Z] busy:2603195208 (cyc) 00:06:10.513 [2024-11-29T09:23:38.239Z] total_run_count: 5360000 00:06:10.513 [2024-11-29T09:23:38.239Z] tsc_hz: 2600000000 (cyc) 00:06:10.513 [2024-11-29T09:23:38.239Z] ====================================== 00:06:10.513 [2024-11-29T09:23:38.239Z] poller_cost: 485 (cyc), 186 (nsec) 00:06:10.513 00:06:10.513 real 0m1.255s 00:06:10.513 user 0m1.078s 00:06:10.513 sys 0m0.072s 00:06:10.513 ************************************ 00:06:10.513 END TEST thread_poller_perf 00:06:10.513 ************************************ 00:06:10.513 09:23:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.513 09:23:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.513 09:23:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:10.513 00:06:10.513 real 0m2.743s 00:06:10.513 user 0m2.276s 00:06:10.513 sys 0m0.249s 00:06:10.513 ************************************ 00:06:10.513 END TEST thread 00:06:10.513 ************************************ 00:06:10.513 09:23:37 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.513 09:23:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.513 09:23:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:10.513 09:23:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:10.513 09:23:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.513 09:23:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.513 09:23:37 -- common/autotest_common.sh@10 -- # set +x 00:06:10.513 ************************************ 00:06:10.513 START TEST app_cmdline 00:06:10.513 ************************************ 00:06:10.513 09:23:37 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:10.513 * Looking for test storage... 
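The second run uses -l 0 (a zero-microsecond period), so the pollers fire back to back and the per-iteration cost drops sharply; the same arithmetic reproduces the reported figures:

  echo $(( 2603195208 / 5360000 ))              # ~485 cycles per iteration with a 0 us period
  echo $(( 485 * 1000000000 / 2600000000 ))     # ~186 ns at 2.6 GHz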
00:06:10.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:10.513 09:23:37 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.513 09:23:37 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.513 09:23:37 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.513 09:23:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.513 09:23:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:10.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.514 09:23:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.514 --rc genhtml_branch_coverage=1 00:06:10.514 --rc genhtml_function_coverage=1 00:06:10.514 --rc genhtml_legend=1 00:06:10.514 --rc geninfo_all_blocks=1 00:06:10.514 --rc geninfo_unexecuted_blocks=1 00:06:10.514 00:06:10.514 ' 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.514 --rc genhtml_branch_coverage=1 00:06:10.514 --rc genhtml_function_coverage=1 00:06:10.514 --rc genhtml_legend=1 00:06:10.514 --rc geninfo_all_blocks=1 00:06:10.514 --rc geninfo_unexecuted_blocks=1 00:06:10.514 00:06:10.514 ' 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.514 --rc genhtml_branch_coverage=1 00:06:10.514 --rc genhtml_function_coverage=1 00:06:10.514 --rc genhtml_legend=1 00:06:10.514 --rc geninfo_all_blocks=1 00:06:10.514 --rc geninfo_unexecuted_blocks=1 00:06:10.514 00:06:10.514 ' 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.514 --rc genhtml_branch_coverage=1 00:06:10.514 --rc genhtml_function_coverage=1 00:06:10.514 --rc genhtml_legend=1 00:06:10.514 --rc geninfo_all_blocks=1 00:06:10.514 --rc geninfo_unexecuted_blocks=1 00:06:10.514 00:06:10.514 ' 00:06:10.514 09:23:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:10.514 09:23:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73172 00:06:10.514 09:23:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73172 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73172 ']' 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.514 09:23:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.514 09:23:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:10.514 [2024-11-29 09:23:38.112710] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:10.514 [2024-11-29 09:23:38.112978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73172 ] 00:06:10.773 [2024-11-29 09:23:38.246114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.773 [2024-11-29 09:23:38.269097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.773 [2024-11-29 09:23:38.290970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.343 09:23:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.343 09:23:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:11.343 09:23:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:11.603 { 00:06:11.603 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:11.603 "fields": { 00:06:11.603 "major": 25, 00:06:11.603 "minor": 1, 00:06:11.603 "patch": 0, 00:06:11.603 "suffix": "-pre", 00:06:11.603 "commit": "35cd3e84d" 00:06:11.603 } 00:06:11.603 } 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.603 09:23:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.603 
09:23:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:11.603 09:23:39 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.865 request: 00:06:11.865 { 00:06:11.865 "method": "env_dpdk_get_mem_stats", 00:06:11.865 "req_id": 1 00:06:11.865 } 00:06:11.865 Got JSON-RPC error response 00:06:11.865 response: 00:06:11.865 { 00:06:11.865 "code": -32601, 00:06:11.865 "message": "Method not found" 00:06:11.865 } 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.865 09:23:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73172 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73172 ']' 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73172 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73172 00:06:11.865 killing process with pid 73172 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73172' 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@973 -- # kill 73172 00:06:11.865 09:23:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 73172 00:06:12.125 ************************************ 00:06:12.125 END TEST app_cmdline 00:06:12.125 ************************************ 00:06:12.125 00:06:12.125 real 0m1.798s 00:06:12.125 user 0m2.087s 00:06:12.125 sys 0m0.427s 00:06:12.125 09:23:39 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.125 09:23:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.125 09:23:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:12.125 09:23:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.125 09:23:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.125 09:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:12.125 ************************************ 00:06:12.125 START TEST version 00:06:12.125 ************************************ 00:06:12.125 09:23:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:12.125 * Looking for test storage... 
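The NOT/valid_exec_arg sequence above is the harness asserting that a method outside the allowlist is rejected: cmdline.sh@16 started spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats has to fail with JSON-RPC error -32601 (Method not found), and es=1 records the expected non-zero exit. Outside the harness the same check reduces to (a sketch, assuming a target configured as in cmdline.sh@16 is still listening on /var/tmp/spdk.sock):

    # Expected to fail: env_dpdk_get_mem_stats is not in --rpcs-allowed.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats; then
        echo 'expected -32601 Method not found' >&2
        exit 1
    fi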
00:06:12.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:12.125 09:23:39 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.125 09:23:39 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.125 09:23:39 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.384 09:23:39 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.384 09:23:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.384 09:23:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.384 09:23:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.384 09:23:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.384 09:23:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.384 09:23:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.384 09:23:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.384 09:23:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.384 09:23:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.384 09:23:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.384 09:23:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.384 09:23:39 version -- scripts/common.sh@344 -- # case "$op" in 00:06:12.384 09:23:39 version -- scripts/common.sh@345 -- # : 1 00:06:12.384 09:23:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.384 09:23:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.384 09:23:39 version -- scripts/common.sh@365 -- # decimal 1 00:06:12.384 09:23:39 version -- scripts/common.sh@353 -- # local d=1 00:06:12.384 09:23:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.384 09:23:39 version -- scripts/common.sh@355 -- # echo 1 00:06:12.384 09:23:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.384 09:23:39 version -- scripts/common.sh@366 -- # decimal 2 00:06:12.384 09:23:39 version -- scripts/common.sh@353 -- # local d=2 00:06:12.384 09:23:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.384 09:23:39 version -- scripts/common.sh@355 -- # echo 2 00:06:12.384 09:23:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.384 09:23:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.385 09:23:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.385 09:23:39 version -- scripts/common.sh@368 -- # return 0 00:06:12.385 09:23:39 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.385 09:23:39 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.385 --rc genhtml_function_coverage=1 00:06:12.385 --rc genhtml_legend=1 00:06:12.385 --rc geninfo_all_blocks=1 00:06:12.385 --rc geninfo_unexecuted_blocks=1 00:06:12.385 00:06:12.385 ' 00:06:12.385 09:23:39 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.385 --rc genhtml_function_coverage=1 00:06:12.385 --rc genhtml_legend=1 00:06:12.385 --rc geninfo_all_blocks=1 00:06:12.385 --rc geninfo_unexecuted_blocks=1 00:06:12.385 00:06:12.385 ' 00:06:12.385 09:23:39 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.385 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.385 --rc genhtml_function_coverage=1 00:06:12.385 --rc genhtml_legend=1 00:06:12.385 --rc geninfo_all_blocks=1 00:06:12.385 --rc geninfo_unexecuted_blocks=1 00:06:12.385 00:06:12.385 ' 00:06:12.385 09:23:39 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.385 --rc genhtml_function_coverage=1 00:06:12.385 --rc genhtml_legend=1 00:06:12.385 --rc geninfo_all_blocks=1 00:06:12.385 --rc geninfo_unexecuted_blocks=1 00:06:12.385 00:06:12.385 ' 00:06:12.385 09:23:39 version -- app/version.sh@17 -- # get_header_version major 00:06:12.385 09:23:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # cut -f2 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.385 09:23:39 version -- app/version.sh@17 -- # major=25 00:06:12.385 09:23:39 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # cut -f2 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.385 09:23:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.385 09:23:39 version -- app/version.sh@18 -- # minor=1 00:06:12.385 09:23:39 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.385 09:23:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # cut -f2 00:06:12.385 09:23:39 version -- app/version.sh@19 -- # patch=0 00:06:12.385 09:23:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # cut -f2 00:06:12.385 09:23:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.385 09:23:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.385 09:23:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.385 09:23:39 version -- app/version.sh@22 -- # version=25.1 00:06:12.385 09:23:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.385 09:23:39 version -- app/version.sh@28 -- # version=25.1rc0 00:06:12.385 09:23:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:12.385 09:23:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:12.385 09:23:39 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:12.385 09:23:39 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:12.385 ************************************ 00:06:12.385 END TEST version 00:06:12.385 ************************************ 00:06:12.385 00:06:12.385 real 0m0.184s 00:06:12.385 user 0m0.113s 00:06:12.385 sys 0m0.095s 00:06:12.385 09:23:39 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.385 09:23:39 version -- common/autotest_common.sh@10 -- # set +x 00:06:12.385 09:23:39 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:12.385 09:23:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:12.385 09:23:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:12.385 09:23:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:12.385 09:23:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.385 09:23:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.385 09:23:39 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:12.385 09:23:39 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:12.385 09:23:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.385 09:23:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.385 09:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:12.385 ************************************ 00:06:12.385 START TEST blockdev_nvme 00:06:12.385 ************************************ 00:06:12.385 09:23:39 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:12.385 * Looking for test storage... 00:06:12.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.385 09:23:40 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.385 --rc genhtml_function_coverage=1 00:06:12.385 --rc genhtml_legend=1 00:06:12.385 --rc geninfo_all_blocks=1 00:06:12.385 --rc geninfo_unexecuted_blocks=1 00:06:12.385 00:06:12.385 ' 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.385 --rc genhtml_function_coverage=1 00:06:12.385 --rc genhtml_legend=1 00:06:12.385 --rc geninfo_all_blocks=1 00:06:12.385 --rc geninfo_unexecuted_blocks=1 00:06:12.385 00:06:12.385 ' 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.385 --rc genhtml_function_coverage=1 00:06:12.385 --rc genhtml_legend=1 00:06:12.385 --rc geninfo_all_blocks=1 00:06:12.385 --rc geninfo_unexecuted_blocks=1 00:06:12.385 00:06:12.385 ' 00:06:12.385 09:23:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.385 --rc genhtml_branch_coverage=1 00:06:12.386 --rc genhtml_function_coverage=1 00:06:12.386 --rc genhtml_legend=1 00:06:12.386 --rc geninfo_all_blocks=1 00:06:12.386 --rc geninfo_unexecuted_blocks=1 00:06:12.386 00:06:12.386 ' 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:12.386 09:23:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:12.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73333 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73333 00:06:12.386 09:23:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:12.386 09:23:40 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73333 ']' 00:06:12.386 09:23:40 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.386 09:23:40 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.386 09:23:40 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.386 09:23:40 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.386 09:23:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.645 [2024-11-29 09:23:40.167054] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:12.645 [2024-11-29 09:23:40.167150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73333 ] 00:06:12.645 [2024-11-29 09:23:40.294272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:12.645 [2024-11-29 09:23:40.325268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.645 [2024-11-29 09:23:40.349314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.580 09:23:41 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.580 09:23:41 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:13.580 09:23:41 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:13.580 09:23:41 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:13.580 09:23:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:13.580 09:23:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:13.580 09:23:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:13.580 09:23:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:13.580 09:23:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.580 09:23:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 
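Two pieces of plumbing from the trace above are worth unpacking. First, rpc_py=rpc_cmd (blockdev.sh@12) routes every RPC in this suite through the harness's rpc_cmd wrapper against the target's /var/tmp/spdk.sock socket, with RPC_PIPE_TIMEOUT=30 as its timeout. Second, setup_nvme_conf feeds load_subsystem_config the output of scripts/gen_nvme.sh, in which each controller entry has the following shape (reflowed verbatim from the escaped JSON above):

    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }

with matching entries for Nvme1 (0000:00:11.0), Nvme2 (0000:00:12.0) and Nvme3 (0000:00:13.0). The bdev_get_bdevs dump that follows lists the namespaces those four attaches produced.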
00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.839 09:23:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:13.839 09:23:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:13.840 09:23:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "da3064a0-f38c-4129-a9a8-46c079108625"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "da3064a0-f38c-4129-a9a8-46c079108625",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "df74371e-f430-4087-a4de-a770a0ce7250"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "df74371e-f430-4087-a4de-a770a0ce7250",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n1",' ' "aliases": [' ' "5393bd69-eb3b-4b12-9311-e890e960bdba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5393bd69-eb3b-4b12-9311-e890e960bdba",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1cfd184f-68c5-4a50-9ab8-cd69e27e6d6b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1cfd184f-68c5-4a50-9ab8-cd69e27e6d6b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1636bd47-2ee8-42a3-b6a3-4c6967dfb7a3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1636bd47-2ee8-42a3-b6a3-4c6967dfb7a3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e4aba408-c0ba-4c09-961b-b3e7774428e0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e4aba408-c0ba-4c09-961b-b3e7774428e0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:13.840 09:23:41 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:13.840 09:23:41 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:13.840 09:23:41 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:13.840 09:23:41 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 73333 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73333 ']' 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73333 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73333 00:06:13.840 killing process with pid 73333 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.840 09:23:41 blockdev_nvme 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73333' 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73333 00:06:13.840 09:23:41 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73333 00:06:14.406 09:23:41 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:14.406 09:23:41 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:14.406 09:23:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:14.406 09:23:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.406 09:23:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.406 ************************************ 00:06:14.406 START TEST bdev_hello_world 00:06:14.406 ************************************ 00:06:14.406 09:23:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:14.406 [2024-11-29 09:23:41.903869] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:14.406 [2024-11-29 09:23:41.903993] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73395 ] 00:06:14.406 [2024-11-29 09:23:42.036043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.406 [2024-11-29 09:23:42.065949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.406 [2024-11-29 09:23:42.089867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.974 [2024-11-29 09:23:42.476143] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:14.974 [2024-11-29 09:23:42.476194] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:14.974 [2024-11-29 09:23:42.476213] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:14.974 [2024-11-29 09:23:42.478428] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:14.974 [2024-11-29 09:23:42.478929] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:14.974 [2024-11-29 09:23:42.478961] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:14.974 [2024-11-29 09:23:42.479176] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
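The NOTICE lines above are the hello_bdev example's entire happy path: start the app, open bdev Nvme0n1, get an I/O channel, write, read the string back, and stop. The harness invocation at blockdev.sh@797 can be reproduced by hand with the same single command (paths as in this workspace):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1

which should end, as here, by reading back 'Hello World!' and stopping the app.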
00:06:14.974 00:06:14.974 [2024-11-29 09:23:42.479196] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:14.974 00:06:14.974 real 0m0.812s 00:06:14.974 user 0m0.532s 00:06:14.974 sys 0m0.178s 00:06:14.974 ************************************ 00:06:14.974 END TEST bdev_hello_world 00:06:14.974 ************************************ 00:06:14.974 09:23:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.974 09:23:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:14.974 09:23:42 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:14.974 09:23:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.974 09:23:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.974 09:23:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.974 ************************************ 00:06:14.974 START TEST bdev_bounds 00:06:14.974 ************************************ 00:06:14.974 Process bdevio pid: 73426 00:06:14.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73426 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73426' 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73426 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73426 ']' 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:14.974 09:23:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:15.232 [2024-11-29 09:23:42.754677] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:15.232 [2024-11-29 09:23:42.754956] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73426 ] 00:06:15.232 [2024-11-29 09:23:42.887990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
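bdev_bounds drives the standalone bdevio app against the same bdev.json. As the invocations at blockdev.sh@288 (above) and blockdev.sh@293 (just below) show, the pattern is to start bdevio in wait mode and then trigger its CUnit suites over the RPC socket (a two-command sketch; the harness additionally backgrounds the server and waits for the socket, and -w is taken here to mean wait for perform_tests):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests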
00:06:15.232 [2024-11-29 09:23:42.919328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.232 [2024-11-29 09:23:42.945399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.232 [2024-11-29 09:23:42.945640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.232 [2024-11-29 09:23:42.945678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.167 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.167 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:16.167 09:23:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:16.167 I/O targets: 00:06:16.167 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:16.167 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:16.167 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:16.167 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:16.167 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:16.167 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:16.167 00:06:16.167 00:06:16.167 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.167 http://cunit.sourceforge.net/ 00:06:16.167 00:06:16.167 00:06:16.167 Suite: bdevio tests on: Nvme3n1 00:06:16.167 Test: blockdev write read block ...passed 00:06:16.167 Test: blockdev write zeroes read block ...passed 00:06:16.167 Test: blockdev write zeroes read no split ...passed 00:06:16.167 Test: blockdev write zeroes read split ...passed 00:06:16.167 Test: blockdev write zeroes read split partial ...passed 00:06:16.167 Test: blockdev reset ...[2024-11-29 09:23:43.685225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:16.167 [2024-11-29 09:23:43.687279] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:06:16.167 Test: blockdev write read 8 blocks ...
00:06:16.167 passed 00:06:16.167 Test: blockdev write read size > 128k ...passed 00:06:16.167 Test: blockdev write read invalid size ...passed 00:06:16.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.167 Test: blockdev write read max offset ...passed 00:06:16.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.167 Test: blockdev writev readv 8 blocks ...passed 00:06:16.167 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.167 Test: blockdev writev readv block ...passed 00:06:16.167 Test: blockdev writev readv size > 128k ...passed 00:06:16.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.167 Test: blockdev comparev and writev ...[2024-11-29 09:23:43.692786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2abe0e000 len:0x1000 00:06:16.167 [2024-11-29 09:23:43.692838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.167 passed 00:06:16.167 Test: blockdev nvme passthru rw ...passed 00:06:16.167 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.167 Test: blockdev nvme admin passthru ...[2024-11-29 09:23:43.693410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.167 [2024-11-29 09:23:43.693443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.167 passed 00:06:16.167 Test: blockdev copy ...passed 00:06:16.167 Suite: bdevio tests on: Nvme2n3 00:06:16.168 Test: blockdev write read block ...passed 00:06:16.168 Test: blockdev write zeroes read block ...passed 00:06:16.168 Test: blockdev write zeroes read no split ...passed 00:06:16.168 Test: blockdev write zeroes read split ...passed 00:06:16.168 Test: blockdev write zeroes read split partial ...passed 00:06:16.168 Test: blockdev reset ...[2024-11-29 09:23:43.706164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:16.168 passed 00:06:16.168 Test: blockdev write read 8 blocks ...[2024-11-29 09:23:43.708572] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:16.168 passed 00:06:16.168 Test: blockdev write read size > 128k ...passed 00:06:16.168 Test: blockdev write read invalid size ...passed 00:06:16.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.168 Test: blockdev write read max offset ...passed 00:06:16.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.168 Test: blockdev writev readv 8 blocks ...passed 00:06:16.168 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.168 Test: blockdev writev readv block ...passed 00:06:16.168 Test: blockdev writev readv size > 128k ...passed 00:06:16.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.168 Test: blockdev comparev and writev ...[2024-11-29 09:23:43.713839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2abe06000 len:0x1000 00:06:16.168 [2024-11-29 09:23:43.713880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.168 Test: blockdev nvme passthru rw ...passed 00:06:16.168 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.168 Test: blockdev nvme admin passthru ...[2024-11-29 09:23:43.714439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.168 [2024-11-29 09:23:43.714462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.168 Test: blockdev copy ...passed 00:06:16.168 Suite: bdevio tests on: Nvme2n2 00:06:16.168 Test: blockdev write read block ...passed 00:06:16.168 Test: blockdev write zeroes read block ...passed 00:06:16.168 Test: blockdev write zeroes read no split ...passed 00:06:16.168 Test: blockdev write zeroes read split ...passed 00:06:16.168 Test: blockdev write zeroes read split partial ...passed 00:06:16.168 Test: blockdev reset ...[2024-11-29 09:23:43.727447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:16.168 [2024-11-29 09:23:43.729220] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:16.168 Test: blockdev write read 8 blocks ...passed 00:06:16.168 Test: blockdev write read size > 128k ...
00:06:16.168 passed 00:06:16.168 Test: blockdev write read invalid size ...passed 00:06:16.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.168 Test: blockdev write read max offset ...passed 00:06:16.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.168 Test: blockdev writev readv 8 blocks ...passed 00:06:16.168 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.168 Test: blockdev writev readv block ...passed 00:06:16.168 Test: blockdev writev readv size > 128k ...passed 00:06:16.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.168 Test: blockdev comparev and writev ...[2024-11-29 09:23:43.733725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2abe08000 len:0x1000 00:06:16.168 [2024-11-29 09:23:43.733762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.168 Test: blockdev nvme passthru rw ...passed 00:06:16.168 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.168 Test: blockdev nvme admin passthru ...[2024-11-29 09:23:43.734410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.168 [2024-11-29 09:23:43.734437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.168 Test: blockdev copy ...passed 00:06:16.168 Suite: bdevio tests on: Nvme2n1 00:06:16.168 Test: blockdev write read block ...passed 00:06:16.168 Test: blockdev write zeroes read block ...passed 00:06:16.168 Test: blockdev write zeroes read no split ...passed 00:06:16.168 Test: blockdev write zeroes read split ...passed 00:06:16.168 Test: blockdev write zeroes read split partial ...passed 00:06:16.168 Test: blockdev reset ...[2024-11-29 09:23:43.747924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:16.168 passed 00:06:16.168 Test: blockdev write read 8 blocks ...[2024-11-29 09:23:43.749925] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:16.168 passed 00:06:16.168 Test: blockdev write read size > 128k ...passed 00:06:16.168 Test: blockdev write read invalid size ...passed 00:06:16.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.168 Test: blockdev write read max offset ...passed 00:06:16.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.168 Test: blockdev writev readv 8 blocks ...passed 00:06:16.168 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.168 Test: blockdev writev readv block ...passed 00:06:16.168 Test: blockdev writev readv size > 128k ...passed 00:06:16.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.168 Test: blockdev comparev and writev ...[2024-11-29 09:23:43.755286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:16.168 Test: blockdev nvme passthru rw ...passed 00:06:16.168 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2aba04000 len:0x1000 00:06:16.168 [2024-11-29 09:23:43.755414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.168 Test: blockdev nvme admin passthru ...[2024-11-29 09:23:43.756039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.168 [2024-11-29 09:23:43.756059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.168 Test: blockdev copy ...passed 00:06:16.168 Suite: bdevio tests on: Nvme1n1 00:06:16.168 Test: blockdev write read block ...passed 00:06:16.168 Test: blockdev write zeroes read block ...passed 00:06:16.168 Test: blockdev write zeroes read no split ...passed 00:06:16.168 Test: blockdev write zeroes read split ...passed 00:06:16.168 Test: blockdev write zeroes read split partial ...passed 00:06:16.168 Test: blockdev reset ...[2024-11-29 09:23:43.768799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:16.168 passed 00:06:16.168 Test: blockdev write read 8 blocks ...[2024-11-29 09:23:43.770526] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:16.168 passed 00:06:16.168 Test: blockdev write read size > 128k ...passed 00:06:16.168 Test: blockdev write read invalid size ...passed 00:06:16.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.168 Test: blockdev write read max offset ...passed 00:06:16.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.168 Test: blockdev writev readv 8 blocks ...passed 00:06:16.168 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.168 Test: blockdev writev readv block ...passed 00:06:16.168 Test: blockdev writev readv size > 128k ...passed 00:06:16.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.168 Test: blockdev comparev and writev ...[2024-11-29 09:23:43.775008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e623d000 len:0x1000 00:06:16.168 [2024-11-29 09:23:43.775042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.168 Test: blockdev nvme passthru rw ...passed 00:06:16.168 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.168 Test: blockdev nvme admin passthru ...[2024-11-29 09:23:43.775598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.168 [2024-11-29 09:23:43.775622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.168 passed 00:06:16.169 Test: blockdev copy ...passed 00:06:16.169 Suite: bdevio tests on: Nvme0n1 00:06:16.169 Test: blockdev write read block ...passed 00:06:16.169 Test: blockdev write zeroes read block ...passed 00:06:16.169 Test: blockdev write zeroes read no split ...passed 00:06:16.169 Test: blockdev write zeroes read split ...passed 00:06:16.169 Test: blockdev write zeroes read split partial ...passed 00:06:16.169 Test: blockdev reset ...[2024-11-29 09:23:43.789682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:16.169 passed 00:06:16.169 Test: blockdev write read 8 blocks ...[2024-11-29 09:23:43.791426] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:16.169 passed 00:06:16.169 Test: blockdev write read size > 128k ...passed 00:06:16.169 Test: blockdev write read invalid size ...passed 00:06:16.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.169 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.169 Test: blockdev write read max offset ...passed 00:06:16.169 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.169 Test: blockdev writev readv 8 blocks ...passed 00:06:16.169 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.169 Test: blockdev writev readv block ...passed 00:06:16.169 Test: blockdev writev readv size > 128k ...passed 00:06:16.169 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.169 Test: blockdev comparev and writev ...[2024-11-29 09:23:43.795304] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:16.169 separate metadata which is not supported yet. 
00:06:16.169 passed 00:06:16.169 Test: blockdev nvme passthru rw ...passed 00:06:16.169 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.169 Test: blockdev nvme admin passthru ...[2024-11-29 09:23:43.795887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:16.169 [2024-11-29 09:23:43.795919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:16.169 passed 00:06:16.169 Test: blockdev copy ...passed 00:06:16.169 00:06:16.169 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.169 suites 6 6 n/a 0 0 00:06:16.169 tests 138 138 138 0 0 00:06:16.169 asserts 893 893 893 0 n/a 00:06:16.169 00:06:16.169 Elapsed time = 0.290 seconds 00:06:16.169 0 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73426 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73426 ']' 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73426 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73426 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73426' 00:06:16.169 killing process with pid 73426 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73426 00:06:16.169 09:23:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73426 00:06:16.428 09:23:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:16.428 00:06:16.428 real 0m1.324s 00:06:16.428 user 0m3.328s 00:06:16.428 sys 0m0.273s 00:06:16.428 09:23:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.428 09:23:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:16.428 ************************************ 00:06:16.428 END TEST bdev_bounds 00:06:16.428 ************************************ 00:06:16.428 09:23:44 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:16.428 09:23:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:16.428 09:23:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.428 09:23:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.428 ************************************ 00:06:16.428 START TEST bdev_nbd 00:06:16.428 ************************************ 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:16.428 09:23:44 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73475 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73475 /var/tmp/spdk-nbd.sock 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73475 ']' 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:16.428 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:16.428 [2024-11-29 09:23:44.123445] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:16.428 [2024-11-29 09:23:44.123704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.686 [2024-11-29 09:23:44.256450] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
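The bdev_nbd test (nbd_function_test) traced here starts a bare bdev_svc application on a private RPC socket and drives it entirely through rpc.py. A condensed sketch of that sequence, using the same paths as this run (the socket-wait loop and the single-device example are simplifications of the repo's waitforlisten/waitfornbd helpers):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock

$SPDK/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json "$SPDK/test/bdev/bdev.json" &
svc_pid=$!
until [ -S "$SOCK" ]; do sleep 0.1; done                        # wait for the RPC socket to appear

$SPDK/scripts/rpc.py -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0
grep -q -w nbd0 /proc/partitions                                # same readiness check as waitfornbd
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct       # read one block through the NBD device

$SPDK/scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0
kill "$svc_pid"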
00:06:16.686 [2024-11-29 09:23:44.287180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.686 [2024-11-29 09:23:44.311642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.330 09:23:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.588 1+0 records in 00:06:17.588 1+0 records out 00:06:17.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538125 s, 7.6 MB/s 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # size=4096 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.588 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.845 1+0 records in 00:06:17.845 1+0 records out 00:06:17.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357863 s, 11.4 MB/s 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.845 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.103 1+0 records in 00:06:18.103 1+0 records out 00:06:18.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440295 s, 9.3 MB/s 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.103 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:18.360 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:18.360 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:18.360 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.361 1+0 records in 00:06:18.361 1+0 records out 00:06:18.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789415 s, 5.2 MB/s 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.361 09:23:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.620 1+0 records in 00:06:18.620 1+0 records out 00:06:18.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045735 s, 9.0 MB/s 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.620 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@877 -- # break 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.878 1+0 records in 00:06:18.878 1+0 records out 00:06:18.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361912 s, 11.3 MB/s 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd0", 00:06:18.878 "bdev_name": "Nvme0n1" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd1", 00:06:18.878 "bdev_name": "Nvme1n1" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd2", 00:06:18.878 "bdev_name": "Nvme2n1" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd3", 00:06:18.878 "bdev_name": "Nvme2n2" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd4", 00:06:18.878 "bdev_name": "Nvme2n3" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd5", 00:06:18.878 "bdev_name": "Nvme3n1" 00:06:18.878 } 00:06:18.878 ]' 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd0", 00:06:18.878 "bdev_name": "Nvme0n1" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd1", 00:06:18.878 "bdev_name": "Nvme1n1" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd2", 00:06:18.878 "bdev_name": "Nvme2n1" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd3", 00:06:18.878 "bdev_name": "Nvme2n2" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd4", 00:06:18.878 "bdev_name": "Nvme2n3" 00:06:18.878 }, 00:06:18.878 { 00:06:18.878 "nbd_device": "/dev/nbd5", 00:06:18.878 "bdev_name": "Nvme3n1" 00:06:18.878 } 00:06:18.878 ]' 00:06:18.878 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.137 09:23:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.394 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.653 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:19.910 
09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:19.910 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:19.910 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:19.911 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.911 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.911 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:19.911 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.911 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.911 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.911 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.168 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.426 09:23:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.426 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:20.685 /dev/nbd0 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.685 
09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.685 1+0 records in 00:06:20.685 1+0 records out 00:06:20.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546304 s, 7.5 MB/s 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.685 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:20.943 /dev/nbd1 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.943 1+0 records in 00:06:20.943 1+0 records out 00:06:20.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411009 s, 10.0 MB/s 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.943 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:21.199 /dev/nbd10 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.199 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.199 1+0 records in 00:06:21.199 1+0 records out 00:06:21.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383869 s, 10.7 MB/s 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.200 09:23:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:21.458 /dev/nbd11 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.458 1+0 records in 00:06:21.458 1+0 records 
out 00:06:21.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400621 s, 10.2 MB/s 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.458 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:21.716 /dev/nbd12 00:06:21.716 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:21.716 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.717 1+0 records in 00:06:21.717 1+0 records out 00:06:21.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452866 s, 9.0 MB/s 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.717 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:21.973 /dev/nbd13 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:21.974 09:23:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.974 1+0 records in 00:06:21.974 1+0 records out 00:06:21.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447659 s, 9.1 MB/s 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.974 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd0", 00:06:22.232 "bdev_name": "Nvme0n1" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd1", 00:06:22.232 "bdev_name": "Nvme1n1" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd10", 00:06:22.232 "bdev_name": "Nvme2n1" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd11", 00:06:22.232 "bdev_name": "Nvme2n2" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd12", 00:06:22.232 "bdev_name": "Nvme2n3" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd13", 00:06:22.232 "bdev_name": "Nvme3n1" 00:06:22.232 } 00:06:22.232 ]' 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd0", 00:06:22.232 "bdev_name": "Nvme0n1" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd1", 00:06:22.232 "bdev_name": "Nvme1n1" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd10", 00:06:22.232 "bdev_name": "Nvme2n1" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd11", 00:06:22.232 "bdev_name": "Nvme2n2" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd12", 00:06:22.232 "bdev_name": "Nvme2n3" 00:06:22.232 }, 00:06:22.232 { 00:06:22.232 "nbd_device": "/dev/nbd13", 00:06:22.232 "bdev_name": 
"Nvme3n1" 00:06:22.232 } 00:06:22.232 ]' 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.232 /dev/nbd1 00:06:22.232 /dev/nbd10 00:06:22.232 /dev/nbd11 00:06:22.232 /dev/nbd12 00:06:22.232 /dev/nbd13' 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.232 /dev/nbd1 00:06:22.232 /dev/nbd10 00:06:22.232 /dev/nbd11 00:06:22.232 /dev/nbd12 00:06:22.232 /dev/nbd13' 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:22.232 256+0 records in 00:06:22.232 256+0 records out 00:06:22.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00982549 s, 107 MB/s 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.232 256+0 records in 00:06:22.232 256+0 records out 00:06:22.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.056072 s, 18.7 MB/s 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.232 256+0 records in 00:06:22.232 256+0 records out 00:06:22.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0569136 s, 18.4 MB/s 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.232 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:22.490 256+0 records in 00:06:22.490 256+0 records out 00:06:22.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0563792 s, 18.6 MB/s 00:06:22.490 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 
bs=4096 count=256 oflag=direct 00:06:22.490 256+0 records in 00:06:22.490 256+0 records out 00:06:22.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0599681 s, 17.5 MB/s 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:22.490 256+0 records in 00:06:22.490 256+0 records out 00:06:22.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0560889 s, 18.7 MB/s 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:22.490 256+0 records in 00:06:22.490 256+0 records out 00:06:22.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0574051 s, 18.3 MB/s 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.490 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.491 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.777 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.034 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.292 09:23:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.292 09:23:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:23.549 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.550 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.807 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:24.064 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:24.322 malloc_lvol_verify 00:06:24.322 09:23:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:24.580 b8e2a3b7-a968-43c3-8725-f69ad33abfff 00:06:24.580 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:24.580 d3d6ea33-f8f5-44c0-9de6-e98358f3d0d8 00:06:24.580 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:24.841 /dev/nbd0 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:24.841 mke2fs 1.47.0 (5-Feb-2023) 00:06:24.841 Discarding device blocks: 0/4096 done 00:06:24.841 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:24.841 00:06:24.841 Allocating group tables: 0/1 done 00:06:24.841 Writing inode tables: 0/1 done 00:06:24.841 Creating journal (1024 blocks): done 00:06:24.841 Writing superblocks and filesystem accounting information: 0/1 done 00:06:24.841 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.841 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73475 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73475 ']' 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73475 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73475 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.102 killing process with pid 73475 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73475' 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73475 00:06:25.102 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73475 00:06:25.364 09:23:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:25.364 00:06:25.364 real 0m8.915s 00:06:25.364 user 0m13.152s 00:06:25.364 sys 0m3.004s 00:06:25.364 ************************************ 00:06:25.364 END TEST bdev_nbd 00:06:25.364 ************************************ 00:06:25.364 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.364 09:23:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:25.364 skipping fio tests on NVMe due to multi-ns failures. 00:06:25.364 09:23:53 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:25.364 09:23:53 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:25.364 09:23:53 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
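Note on the nbd test that just completed: nbd_dd_data_verify boils down to a write-then-compare loop over every exported /dev/nbdX. A minimal sketch of the pattern in shell (illustrative scratch path; the real helper lives in test/bdev/nbd_common.sh, and /dev/urandom as the source of the nbdrandtest file is an assumption, since the file is prepared before this excerpt):

  # fill a 1 MiB scratch file, push it through each nbd device with O_DIRECT,
  # then compare device contents byte-for-byte against the file
  tmp_file=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit on the first differing byte
  done
  rm "$tmp_file"

The teardown traced above is the complementary pattern: each nbd_stop_disk RPC is followed by polling grep -q -w nbdX /proc/partitions, up to 20 times, until the device node disappears.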
00:06:25.364 09:23:53 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:25.364 09:23:53 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.364 09:23:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:25.364 09:23:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.364 09:23:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.364 ************************************ 00:06:25.364 START TEST bdev_verify 00:06:25.364 ************************************ 00:06:25.364 09:23:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.624 [2024-11-29 09:23:53.091747] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:25.624 [2024-11-29 09:23:53.091870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73837 ] 00:06:25.624 [2024-11-29 09:23:53.226639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.624 [2024-11-29 09:23:53.257015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.624 [2024-11-29 09:23:53.282911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.624 [2024-11-29 09:23:53.282947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.195 Running I/O for 5 seconds... 
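While the verify job runs, the bdevperf flags used above are worth decoding. A commented restatement of the same invocation (paths shortened for readability):

  # 128 outstanding I/Os per job (-q), 4 KiB I/Os (-o), verify workload -w
  # (write a pattern, read it back, compare), 5 second run (-t);
  # -C lets every core submit I/O to each bdev, -m 0x3 runs reactors on cores 0 and 1
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The -C fan-out is consistent with each namespace appearing twice in the results below, once per core mask (0x1 and 0x2).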
00:06:28.527 16896.00 IOPS, 66.00 MiB/s [2024-11-29T09:23:57.194Z] 18496.00 IOPS, 72.25 MiB/s [2024-11-29T09:23:58.137Z] 19562.67 IOPS, 76.42 MiB/s [2024-11-29T09:23:59.076Z] 20480.00 IOPS, 80.00 MiB/s [2024-11-29T09:23:59.076Z] 20928.00 IOPS, 81.75 MiB/s 00:06:31.350 Latency(us) 00:06:31.350 [2024-11-29T09:23:59.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.350 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.350 Verification LBA range: start 0x0 length 0xbd0bd 00:06:31.350 Nvme0n1 : 5.08 1740.17 6.80 0.00 0.00 73347.44 15325.34 84692.68 00:06:31.350 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.350 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:31.350 Nvme0n1 : 5.08 1714.55 6.70 0.00 0.00 74464.93 14619.57 84289.38 00:06:31.350 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.350 Verification LBA range: start 0x0 length 0xa0000 00:06:31.350 Nvme1n1 : 5.08 1739.62 6.80 0.00 0.00 73253.77 17845.96 76626.71 00:06:31.350 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.350 Verification LBA range: start 0xa0000 length 0xa0000 00:06:31.350 Nvme1n1 : 5.08 1713.07 6.69 0.00 0.00 74372.79 18047.61 76626.71 00:06:31.350 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.350 Verification LBA range: start 0x0 length 0x80000 00:06:31.351 Nvme2n1 : 5.08 1737.47 6.79 0.00 0.00 73142.60 18854.20 75820.11 00:06:31.351 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.351 Verification LBA range: start 0x80000 length 0x80000 00:06:31.351 Nvme2n1 : 5.08 1712.62 6.69 0.00 0.00 74226.25 16938.54 72593.72 00:06:31.351 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.351 Verification LBA range: start 0x0 length 0x80000 00:06:31.351 Nvme2n2 : 5.09 1736.52 6.78 0.00 0.00 73003.60 19358.33 72190.42 00:06:31.351 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.351 Verification LBA range: start 0x80000 length 0x80000 00:06:31.351 Nvme2n2 : 5.08 1712.15 6.69 0.00 0.00 74078.00 16232.76 72190.42 00:06:31.351 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.351 Verification LBA range: start 0x0 length 0x80000 00:06:31.351 Nvme2n3 : 5.09 1735.51 6.78 0.00 0.00 72872.39 17845.96 70173.93 00:06:31.351 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.351 Verification LBA range: start 0x80000 length 0x80000 00:06:31.351 Nvme2n3 : 5.09 1711.18 6.68 0.00 0.00 73943.17 16535.24 75416.81 00:06:31.351 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.351 Verification LBA range: start 0x0 length 0x20000 00:06:31.351 Nvme3n1 : 5.09 1734.50 6.78 0.00 0.00 72742.68 13510.50 76223.41 00:06:31.351 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.351 Verification LBA range: start 0x20000 length 0x20000 00:06:31.351 Nvme3n1 : 5.09 1710.19 6.68 0.00 0.00 73807.06 14317.10 75820.11 00:06:31.351 [2024-11-29T09:23:59.077Z] =================================================================================================================== 00:06:31.351 [2024-11-29T09:23:59.077Z] Total : 20697.57 80.85 0.00 0.00 73600.59 13510.50 84692.68 00:06:31.926 00:06:31.926 real 0m6.372s 00:06:31.926 user 0m11.958s 00:06:31.926 sys 0m0.235s 00:06:31.926 09:23:59 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.926 09:23:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.926 ************************************ 00:06:31.926 END TEST bdev_verify 00:06:31.926 ************************************ 00:06:31.926 09:23:59 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.926 09:23:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:31.926 09:23:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.926 09:23:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.926 ************************************ 00:06:31.926 START TEST bdev_verify_big_io 00:06:31.926 ************************************ 00:06:31.926 09:23:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.926 [2024-11-29 09:23:59.546961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:31.926 [2024-11-29 09:23:59.547100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73932 ] 00:06:32.188 [2024-11-29 09:23:59.684350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.188 [2024-11-29 09:23:59.712261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.188 [2024-11-29 09:23:59.755406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.188 [2024-11-29 09:23:59.755454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.757 Running I/O for 5 seconds... 
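The big-I/O pass started above is the same harness with the I/O size raised from 4 KiB to 64 KiB (-o 65536), so expect far lower IOPS at comparable bandwidth. The per-second throughput lines can be sanity-checked by hand:

  # MiB/s = IOPS * io_size / 2^20; for the first one-second sample below:
  echo "$(( 1734 * 65536 / 1048576 )) MiB/s"   # prints "108 MiB/s", matching the reported 108.38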
00:06:38.409 1734.00 IOPS, 108.38 MiB/s [2024-11-29T09:24:06.392Z] 2443.00 IOPS, 152.69 MiB/s [2024-11-29T09:24:06.958Z] 2943.00 IOPS, 183.94 MiB/s 00:06:39.232 Latency(us) 00:06:39.232 [2024-11-29T09:24:06.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.232 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x0 length 0xbd0b 00:06:39.232 Nvme0n1 : 5.57 139.18 8.70 0.00 0.00 871769.43 26819.35 935652.43 00:06:39.232 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:39.232 Nvme0n1 : 5.83 76.81 4.80 0.00 0.00 1575804.40 12855.14 1606741.07 00:06:39.232 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x0 length 0xa000 00:06:39.232 Nvme1n1 : 5.66 147.08 9.19 0.00 0.00 814029.59 85499.27 858219.13 00:06:39.232 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0xa000 length 0xa000 00:06:39.232 Nvme1n1 : 5.88 82.59 5.16 0.00 0.00 1407510.39 16837.71 1690627.15 00:06:39.232 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x0 length 0x8000 00:06:39.232 Nvme2n1 : 5.76 150.69 9.42 0.00 0.00 774633.43 52025.50 832408.02 00:06:39.232 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x8000 length 0x8000 00:06:39.232 Nvme2n1 : 5.89 86.98 5.44 0.00 0.00 1270765.49 33675.42 1432516.14 00:06:39.232 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x0 length 0x8000 00:06:39.232 Nvme2n2 : 5.80 155.27 9.70 0.00 0.00 734278.71 49404.06 845313.58 00:06:39.232 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x8000 length 0x8000 00:06:39.232 Nvme2n2 : 6.00 110.13 6.88 0.00 0.00 966871.53 23189.66 1458327.24 00:06:39.232 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x0 length 0x8000 00:06:39.232 Nvme2n3 : 5.80 157.98 9.87 0.00 0.00 704485.01 40329.85 877577.45 00:06:39.232 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.232 Verification LBA range: start 0x8000 length 0x8000 00:06:39.232 Nvme2n3 : 6.11 148.99 9.31 0.00 0.00 686463.20 10384.94 1484138.34 00:06:39.232 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.233 Verification LBA range: start 0x0 length 0x2000 00:06:39.233 Nvme3n1 : 5.84 174.98 10.94 0.00 0.00 622651.44 831.80 967916.31 00:06:39.233 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.233 Verification LBA range: start 0x2000 length 0x2000 00:06:39.233 Nvme3n1 : 6.35 305.66 19.10 0.00 0.00 321396.85 485.22 1529307.77 00:06:39.233 [2024-11-29T09:24:06.959Z] =================================================================================================================== 00:06:39.233 [2024-11-29T09:24:06.959Z] Total : 1736.35 108.52 0.00 0.00 768723.56 485.22 1690627.15 00:06:40.171 00:06:40.171 real 0m8.209s 00:06:40.171 user 0m15.507s 00:06:40.171 sys 0m0.342s 00:06:40.171 ************************************ 00:06:40.171 END TEST bdev_verify_big_io 00:06:40.171 ************************************ 00:06:40.171 
09:24:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.171 09:24:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:40.171 09:24:07 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.171 09:24:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:40.171 09:24:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.171 09:24:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.171 ************************************ 00:06:40.171 START TEST bdev_write_zeroes 00:06:40.171 ************************************ 00:06:40.171 09:24:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.171 [2024-11-29 09:24:07.801780] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:40.171 [2024-11-29 09:24:07.801878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74041 ] 00:06:40.463 [2024-11-29 09:24:07.928395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.463 [2024-11-29 09:24:07.948812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.463 [2024-11-29 09:24:07.974669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.721 Running I/O for 1 seconds... 
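The pass above swaps the workload for -w write_zeroes at 4 KiB for one second, exercising the bdev layer's zero-fill path rather than data-carrying writes. Whether a bdev advertises native support is visible under supported_io_types in the bdev_get_bdevs dump later in this log; on a live target the same check is one RPC away (a sketch, assuming the default /var/tmp/spdk.sock socket):

  scripts/rpc.py bdev_get_bdevs \
      | jq '.[] | {name: .name, write_zeroes: .supported_io_types.write_zeroes}'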
00:06:42.095 62592.00 IOPS, 244.50 MiB/s 00:06:42.095 Latency(us) 00:06:42.095 [2024-11-29T09:24:09.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.095 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.095 Nvme0n1 : 1.02 10421.12 40.71 0.00 0.00 12258.62 4889.99 25105.33 00:06:42.095 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.095 Nvme1n1 : 1.02 10409.29 40.66 0.00 0.00 12259.04 8570.09 22685.54 00:06:42.095 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.095 Nvme2n1 : 1.02 10397.48 40.62 0.00 0.00 12234.32 8418.86 22786.36 00:06:42.096 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.096 Nvme2n2 : 1.02 10385.68 40.57 0.00 0.00 12232.54 8368.44 21878.94 00:06:42.096 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.096 Nvme2n3 : 1.02 10373.96 40.52 0.00 0.00 12231.53 8519.68 21576.47 00:06:42.096 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:42.096 Nvme3n1 : 1.03 10362.11 40.48 0.00 0.00 12202.52 8418.86 21576.47 00:06:42.096 [2024-11-29T09:24:09.822Z] =================================================================================================================== 00:06:42.096 [2024-11-29T09:24:09.822Z] Total : 62349.63 243.55 0.00 0.00 12236.43 4889.99 25105.33 00:06:42.096 00:06:42.096 real 0m1.907s 00:06:42.096 user 0m1.609s 00:06:42.096 sys 0m0.187s 00:06:42.096 09:24:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.096 ************************************ 00:06:42.096 END TEST bdev_write_zeroes 00:06:42.096 ************************************ 00:06:42.096 09:24:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:42.096 09:24:09 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.096 09:24:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:42.096 09:24:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.096 09:24:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.096 ************************************ 00:06:42.096 START TEST bdev_json_nonenclosed 00:06:42.096 ************************************ 00:06:42.096 09:24:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.096 [2024-11-29 09:24:09.790778] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:42.096 [2024-11-29 09:24:09.790921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74083 ] 00:06:42.356 [2024-11-29 09:24:09.928895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:42.356 [2024-11-29 09:24:09.960487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.356 [2024-11-29 09:24:09.999507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.356 [2024-11-29 09:24:09.999650] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:42.356 [2024-11-29 09:24:09.999673] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:42.356 [2024-11-29 09:24:09.999686] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.616 00:06:42.616 real 0m0.387s 00:06:42.616 user 0m0.156s 00:06:42.616 sys 0m0.126s 00:06:42.616 09:24:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.616 ************************************ 00:06:42.616 END TEST bdev_json_nonenclosed 00:06:42.616 ************************************ 00:06:42.616 09:24:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:42.616 09:24:10 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.616 09:24:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:42.616 09:24:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.616 09:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.616 ************************************ 00:06:42.616 START TEST bdev_json_nonarray 00:06:42.616 ************************************ 00:06:42.616 09:24:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.616 [2024-11-29 09:24:10.252291] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:42.616 [2024-11-29 09:24:10.252441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74103 ] 00:06:42.877 [2024-11-29 09:24:10.389885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.877 [2024-11-29 09:24:10.417736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.877 [2024-11-29 09:24:10.456247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.877 [2024-11-29 09:24:10.456381] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:42.877 [2024-11-29 09:24:10.456402] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:42.877 [2024-11-29 09:24:10.456415] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.877 00:06:42.877 real 0m0.365s 00:06:42.877 user 0m0.141s 00:06:42.877 sys 0m0.120s 00:06:42.877 09:24:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.877 09:24:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:42.877 ************************************ 00:06:42.877 END TEST bdev_json_nonarray 00:06:42.877 ************************************ 00:06:42.877 09:24:10 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:42.877 09:24:10 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:42.877 09:24:10 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:42.877 09:24:10 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:42.878 09:24:10 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:42.878 09:24:10 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:42.878 09:24:10 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:42.878 09:24:10 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:42.878 09:24:10 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:42.878 09:24:10 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:42.878 09:24:10 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:42.878 00:06:42.878 real 0m30.619s 00:06:42.878 user 0m48.382s 00:06:42.878 sys 0m5.211s 00:06:42.878 ************************************ 00:06:42.878 END TEST blockdev_nvme 00:06:42.878 ************************************ 00:06:42.878 09:24:10 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.878 09:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.137 09:24:10 -- spdk/autotest.sh@209 -- # uname -s 00:06:43.137 09:24:10 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:43.137 09:24:10 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:43.137 09:24:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.137 09:24:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.137 09:24:10 -- common/autotest_common.sh@10 -- # set +x 00:06:43.137 ************************************ 00:06:43.137 START TEST blockdev_nvme_gpt 00:06:43.137 ************************************ 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:43.137 * Looking for test storage... 
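Before the gpt suite gets going, a note on the two JSON tests that just passed: they are negative tests, feeding bdevperf syntactically valid JSON whose top-level shape is wrong and expecting a clean json_config_prepare_ctx error plus a non-zero exit rather than a crash. A sketch of the idea (the fixture contents here are guesses inferred from the two error messages above, not the repository's actual nonenclosed.json/nonarray.json):

  # "not enclosed in {}." : top level is an array, not an object
  echo '[ { "subsystems": [] } ]' > /tmp/nonenclosed.json
  # "'subsystems' should be an array." : subsystems is an object here
  echo '{ "subsystems": { "subsystem": "bdev" } }' > /tmp/nonarray.json
  for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
      build/examples/bdevperf --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 \
          && echo "unexpected success for $cfg"
  done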
00:06:43.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.137 09:24:10 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.137 --rc genhtml_branch_coverage=1 00:06:43.137 --rc genhtml_function_coverage=1 00:06:43.137 --rc genhtml_legend=1 00:06:43.137 --rc geninfo_all_blocks=1 00:06:43.137 --rc geninfo_unexecuted_blocks=1 00:06:43.137 00:06:43.137 ' 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.137 --rc 
genhtml_branch_coverage=1 00:06:43.137 --rc genhtml_function_coverage=1 00:06:43.137 --rc genhtml_legend=1 00:06:43.137 --rc geninfo_all_blocks=1 00:06:43.137 --rc geninfo_unexecuted_blocks=1 00:06:43.137 00:06:43.137 ' 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.137 --rc genhtml_branch_coverage=1 00:06:43.137 --rc genhtml_function_coverage=1 00:06:43.137 --rc genhtml_legend=1 00:06:43.137 --rc geninfo_all_blocks=1 00:06:43.137 --rc geninfo_unexecuted_blocks=1 00:06:43.137 00:06:43.137 ' 00:06:43.137 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.138 --rc genhtml_branch_coverage=1 00:06:43.138 --rc genhtml_function_coverage=1 00:06:43.138 --rc genhtml_legend=1 00:06:43.138 --rc geninfo_all_blocks=1 00:06:43.138 --rc geninfo_unexecuted_blocks=1 00:06:43.138 00:06:43.138 ' 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74187 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74187 
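The waitforlisten call above blocks until the freshly started spdk_tgt answers on its RPC socket. Conceptually it is a bounded poll loop like this sketch (the real helper in test/common/autotest_common.sh also handles timeouts and cleanup):

  # poll until the target's RPC server responds on the default socket
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
      sleep 0.5
  done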
00:06:43.138 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74187 ']' 00:06:43.138 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.138 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.138 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.138 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.138 09:24:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.138 09:24:10 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:43.138 [2024-11-29 09:24:10.849874] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:43.138 [2024-11-29 09:24:10.850018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74187 ] 00:06:43.397 [2024-11-29 09:24:10.983983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.397 [2024-11-29 09:24:11.014266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.397 [2024-11-29 09:24:11.039112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.963 09:24:11 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.963 09:24:11 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:43.963 09:24:11 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:43.963 09:24:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:43.963 09:24:11 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:44.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.531 Waiting for block devices as requested 00:06:44.531 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.531 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.788 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.788 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:50.110 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:50.110 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:50.110 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:50.111 09:24:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:50.111 09:24:17 
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:50.111 BYT; 00:06:50.111 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:50.111 BYT; 00:06:50.111 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:50.111 09:24:17 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:50.111 09:24:17 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:51.052 The operation has completed successfully. 00:06:51.052 09:24:18 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:51.995 The operation has completed successfully. 00:06:51.995 09:24:19 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:52.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.827 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:52.827 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:52.827 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:52.827 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.089 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:53.089 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.089 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.089 [] 00:06:53.089 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.089 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:53.089 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:53.089 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:53.089 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:53.089 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:53.089 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.089 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.351 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.351 09:24:20 blockdev_nvme_gpt -- 
bdev/blockdev.sh@777 -- # cat 00:06:53.351 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.351 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.351 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.351 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:53.351 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:53.351 09:24:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.351 09:24:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.351 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.351 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:53.351 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:53.352 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2b1b3cd6-696d-4dc6-bcf4-e4022b43e6d9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2b1b3cd6-696d-4dc6-bcf4-e4022b43e6d9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7f29272c-c46a-4992-92be-cd58b2e7fdfe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f29272c-c46a-4992-92be-cd58b2e7fdfe",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "dc12d603-5871-4cd9-88d8-f00628f59c5a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc12d603-5871-4cd9-88d8-f00628f59c5a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2fdfb321-96dd-454b-9075-7bddccd28c15"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2fdfb321-96dd-454b-9075-7bddccd28c15",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f3929970-cf02-42ea-a8bb-64e6763d7c4c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"f3929970-cf02-42ea-a8bb-64e6763d7c4c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:53.352 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:53.352 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:53.352 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:53.352 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 74187 00:06:53.352 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74187 ']' 00:06:53.352 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74187 00:06:53.352 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:53.352 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.352 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74187 00:06:53.614 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.614 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.614 killing process with pid 74187 00:06:53.614 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74187' 00:06:53.614 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74187 00:06:53.614 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74187 00:06:53.874 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:53.874 09:24:21 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:53.874 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:53.874 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.874 09:24:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.874 ************************************ 00:06:53.874 START TEST bdev_hello_world 00:06:53.874 ************************************ 00:06:53.874 09:24:21 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:53.874 [2024-11-29 09:24:21.460932] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:53.874 [2024-11-29 09:24:21.461070] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74796 ] 00:06:53.874 [2024-11-29 09:24:21.593824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.135 [2024-11-29 09:24:21.617798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.135 [2024-11-29 09:24:21.642413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.396 [2024-11-29 09:24:22.022156] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:54.396 [2024-11-29 09:24:22.022198] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:54.396 [2024-11-29 09:24:22.022216] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:54.396 [2024-11-29 09:24:22.023968] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:54.396 [2024-11-29 09:24:22.024337] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:54.396 [2024-11-29 09:24:22.024358] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:54.396 [2024-11-29 09:24:22.024610] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:54.396 00:06:54.396 [2024-11-29 09:24:22.024629] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:54.657 00:06:54.657 real 0m0.792s 00:06:54.657 user 0m0.516s 00:06:54.657 sys 0m0.172s 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:54.657 ************************************ 00:06:54.657 END TEST bdev_hello_world 00:06:54.657 ************************************ 00:06:54.657 09:24:22 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:54.657 09:24:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.657 09:24:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.657 09:24:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.657 ************************************ 00:06:54.657 START TEST bdev_bounds 00:06:54.657 ************************************ 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74827 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.657 Process bdevio pid: 74827 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74827' 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74827 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74827 ']' 
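The bdev_get_bdevs/jq step earlier in this run is what selected Nvme0n1 as hello_world_bdev: the script asks the running target for all bdevs over RPC, keeps only the unclaimed ones, and takes the first name. A minimal sketch of that selection plus the hello_bdev invocation, assuming a running SPDK target and scripts/rpc.py on PATH (variable names are illustrative):

  # collect unclaimed bdevs, one compact JSON object per line
  mapfile -t bdevs < <(scripts/rpc.py bdev_get_bdevs | jq -c '.[] | select(.claimed == false)')
  # extract just the names
  mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
  hello_world_bdev=${bdevs_name[0]}   # Nvme0n1 in this run
  build/examples/hello_bdev --json test/bdev/bdev.json -b "$hello_world_bdev"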
00:06:54.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.657 09:24:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.657 [2024-11-29 09:24:22.318387] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:54.657 [2024-11-29 09:24:22.318500] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74827 ] 00:06:54.918 [2024-11-29 09:24:22.451310] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.918 [2024-11-29 09:24:22.468942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.918 [2024-11-29 09:24:22.494502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.918 [2024-11-29 09:24:22.494753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.918 [2024-11-29 09:24:22.494879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.489 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.489 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:55.489 09:24:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:55.489 I/O targets: 00:06:55.489 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:55.489 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:55.489 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:55.489 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.489 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.489 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:55.489 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:55.490 00:06:55.490 00:06:55.490 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.490 http://cunit.sourceforge.net/ 00:06:55.490 00:06:55.490 00:06:55.490 Suite: bdevio tests on: Nvme3n1 00:06:55.490 Test: blockdev write read block ...passed 00:06:55.752 Test: blockdev write zeroes read block ...passed 00:06:55.752 Test: blockdev write zeroes read no split ...passed 00:06:55.752 Test: blockdev write zeroes read split ...passed 00:06:55.752 Test: blockdev write zeroes read split partial ...passed 00:06:55.752 Test: blockdev reset ...[2024-11-29 09:24:23.280043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:55.752 [2024-11-29 09:24:23.282988] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:06:55.752 Test: blockdev write read 8 blocks ... 00:06:55.752 passed 00:06:55.752 Test: blockdev write read size > 128k ...passed 00:06:55.752 Test: blockdev write read invalid size ...passed 00:06:55.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.752 Test: blockdev write read max offset ...passed 00:06:55.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.752 Test: blockdev writev readv 8 blocks ...passed 00:06:55.752 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.752 Test: blockdev writev readv block ...passed 00:06:55.752 Test: blockdev writev readv size > 128k ...passed 00:06:55.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.752 Test: blockdev comparev and writev ...[2024-11-29 09:24:23.298267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3a0e000 len:0x1000 [2024-11-29 09:24:23.298404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 passed 00:06:55.752 Test: blockdev nvme passthru rw ... 00:06:55.752 passed 00:06:55.752 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.752 Test: blockdev nvme admin passthru ...[2024-11-29 09:24:23.300287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.752 [2024-11-29 09:24:23.300321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.752 passed 00:06:55.752 Test: blockdev copy ...passed 00:06:55.752 Suite: bdevio tests on: Nvme2n3 00:06:55.752 Test: blockdev write read block ...passed 00:06:55.752 Test: blockdev write zeroes read block ...passed 00:06:55.752 Test: blockdev write zeroes read no split ...passed 00:06:55.752 Test: blockdev write zeroes read split ...passed 00:06:55.752 Test: blockdev write zeroes read split partial ...passed 00:06:55.752 Test: blockdev reset ...[2024-11-29 09:24:23.350986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.752 passed 00:06:55.752 Test: blockdev write read 8 blocks ...[2024-11-29 09:24:23.353880] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
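The COMPARE FAILURE (02/85) completions threaded through the output above decode as status code type 02h (media and data integrity errors), status code 85h (Compare Failure); since every test in these suites still reports passed, they appear to be the deliberately mismatched buffers of the comparev cases rather than real failures. A rough way to replay the same suites by hand, with the bdevio flags copied from the trace above and the default RPC socket assumed:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # once the app is listening, drive all registered CUnit suites
  test/bdev/bdevio/tests.py perform_tests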
00:06:55.752 passed 00:06:55.752 Test: blockdev write read size > 128k ...passed 00:06:55.752 Test: blockdev write read invalid size ...passed 00:06:55.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.752 Test: blockdev write read max offset ...passed 00:06:55.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.752 Test: blockdev writev readv 8 blocks ...passed 00:06:55.752 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.752 Test: blockdev writev readv block ...passed 00:06:55.752 Test: blockdev writev readv size > 128k ...passed 00:06:55.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.752 Test: blockdev comparev and writev ...[2024-11-29 09:24:23.368369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3a06000 len:0x1000 [2024-11-29 09:24:23.368415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.752 passed 00:06:55.752 Test: blockdev nvme passthru rw ...passed 00:06:55.752 Test: blockdev nvme passthru vendor specific ...[2024-11-29 09:24:23.370194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.752 [2024-11-29 09:24:23.370223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.752 passed 00:06:55.752 Test: blockdev nvme admin passthru ...passed 00:06:55.752 Test: blockdev copy ...passed 00:06:55.752 Suite: bdevio tests on: Nvme2n2 00:06:55.752 Test: blockdev write read block ...passed 00:06:55.752 Test: blockdev write zeroes read block ...passed 00:06:55.752 Test: blockdev write zeroes read no split ...passed 00:06:55.752 Test: blockdev write zeroes read split ...passed 00:06:55.752 Test: blockdev write zeroes read split partial ...passed 00:06:55.752 Test: blockdev reset ...[2024-11-29 09:24:23.406708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.752 [2024-11-29 09:24:23.408832] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:55.752 Test: blockdev write read 8 blocks ... 
00:06:55.752 passed 00:06:55.752 Test: blockdev write read size > 128k ...passed 00:06:55.752 Test: blockdev write read invalid size ...passed 00:06:55.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.752 Test: blockdev write read max offset ...passed 00:06:55.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.752 Test: blockdev writev readv 8 blocks ...passed 00:06:55.752 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.753 Test: blockdev writev readv block ...passed 00:06:55.753 Test: blockdev writev readv size > 128k ...passed 00:06:55.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.753 Test: blockdev comparev and writev ...[2024-11-29 09:24:23.421303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3a08000 len:0x1000 [2024-11-29 09:24:23.421344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.753 passed 00:06:55.753 Test: blockdev nvme passthru rw ...passed 00:06:55.753 Test: blockdev nvme passthru vendor specific ...[2024-11-29 09:24:23.423667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.753 [2024-11-29 09:24:23.423779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.753 passed 00:06:55.753 Test: blockdev nvme admin passthru ...passed 00:06:55.753 Test: blockdev copy ...passed 00:06:55.753 Suite: bdevio tests on: Nvme2n1 00:06:55.753 Test: blockdev write read block ...passed 00:06:55.753 Test: blockdev write zeroes read block ...passed 00:06:55.753 Test: blockdev write zeroes read no split ...passed 00:06:55.753 Test: blockdev write zeroes read split ...passed 00:06:55.753 Test: blockdev write zeroes read split partial ...passed 00:06:55.753 Test: blockdev reset ...[2024-11-29 09:24:23.461697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.753 [2024-11-29 09:24:23.463963] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:55.753 Test: blockdev write read 8 blocks ... 
00:06:55.753 passed 00:06:55.753 Test: blockdev write read size > 128k ...passed 00:06:55.753 Test: blockdev write read invalid size ...passed 00:06:55.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.753 Test: blockdev write read max offset ...passed 00:06:55.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.753 Test: blockdev writev readv 8 blocks ...passed 00:06:55.753 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.753 Test: blockdev writev readv block ...passed 00:06:55.753 Test: blockdev writev readv size > 128k ...passed 00:06:56.014 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.014 Test: blockdev comparev and writev ...[2024-11-29 09:24:23.478578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e7a3d000 len:0x1000 [2024-11-29 09:24:23.478623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.014 passed 00:06:56.014 Test: blockdev nvme passthru rw ...passed 00:06:56.014 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.014 Test: blockdev nvme admin passthru ...[2024-11-29 09:24:23.480934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:56.014 [2024-11-29 09:24:23.480966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:56.014 passed 00:06:56.014 Test: blockdev copy ...passed 00:06:56.015 Suite: bdevio tests on: Nvme1n1p2 00:06:56.015 Test: blockdev write read block ...passed 00:06:56.015 Test: blockdev write zeroes read block ...passed 00:06:56.015 Test: blockdev write zeroes read no split ...passed 00:06:56.015 Test: blockdev write zeroes read split ...passed 00:06:56.015 Test: blockdev write zeroes read split partial ...passed 00:06:56.015 Test: blockdev reset ...[2024-11-29 09:24:23.517387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:56.015 [2024-11-29 09:24:23.519406] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:06:56.015 Test: blockdev write read 8 blocks ... 
00:06:56.015 passed 00:06:56.015 Test: blockdev write read size > 128k ...passed 00:06:56.015 Test: blockdev write read invalid size ...passed 00:06:56.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.015 Test: blockdev write read max offset ...passed 00:06:56.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.015 Test: blockdev writev readv 8 blocks ...passed 00:06:56.015 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.015 Test: blockdev writev readv block ...passed 00:06:56.015 Test: blockdev writev readv size > 128k ...passed 00:06:56.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.015 Test: blockdev comparev and writev ...[2024-11-29 09:24:23.534729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e7a39000 len:0x1000 00:06:56.015 [2024-11-29 09:24:23.534764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.015 passed 00:06:56.015 Test: blockdev nvme passthru rw ...passed 00:06:56.015 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.015 Test: blockdev nvme admin passthru ...passed 00:06:56.015 Test: blockdev copy ...passed 00:06:56.015 Suite: bdevio tests on: Nvme1n1p1 00:06:56.015 Test: blockdev write read block ...passed 00:06:56.015 Test: blockdev write zeroes read block ...passed 00:06:56.015 Test: blockdev write zeroes read no split ...passed 00:06:56.015 Test: blockdev write zeroes read split ...passed 00:06:56.015 Test: blockdev write zeroes read split partial ...passed 00:06:56.015 Test: blockdev reset ...[2024-11-29 09:24:23.576125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:56.015 passed 00:06:56.015 Test: blockdev write read 8 blocks ...[2024-11-29 09:24:23.580620] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
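Nvme1n1p1 and Nvme1n1p2 under test here are GPT bdevs layered on Nvme1n1, at offset_blocks 256 and 655360 in the bdev dump earlier. A hypothetical sketch of how a partition table gets the SPDK type GUIDs that the gpt module claims, using sgdisk against an NBD-exported base bdev (device path and 4096-byte-block sector ranges are illustrative, not taken from this run):

  sgdisk --new=1:256:655359 --typecode=1:6527994e-2c5a-4eec-9613-8f5944074e8b --change-name=1:SPDK_TEST_first /dev/nbd0
  sgdisk --new=2:655360:1310462 --typecode=2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c --change-name=2:SPDK_TEST_second /dev/nbd0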
00:06:56.015 passed 00:06:56.015 Test: blockdev write read size > 128k ...passed 00:06:56.015 Test: blockdev write read invalid size ...passed 00:06:56.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.015 Test: blockdev write read max offset ...passed 00:06:56.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.015 Test: blockdev writev readv 8 blocks ...passed 00:06:56.015 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.015 Test: blockdev writev readv block ...passed 00:06:56.015 Test: blockdev writev readv size > 128k ...passed 00:06:56.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.015 Test: blockdev comparev and writev ...[2024-11-29 09:24:23.595496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e7a35000 len:0x1000 00:06:56.015 [2024-11-29 09:24:23.595532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:56.015 passed 00:06:56.015 Test: blockdev nvme passthru rw ...passed 00:06:56.015 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.015 Test: blockdev nvme admin passthru ...passed 00:06:56.015 Test: blockdev copy ...passed 00:06:56.015 Suite: bdevio tests on: Nvme0n1 00:06:56.015 Test: blockdev write read block ...passed 00:06:56.015 Test: blockdev write zeroes read block ...passed 00:06:56.015 Test: blockdev write zeroes read no split ...passed 00:06:56.015 Test: blockdev write zeroes read split ...passed 00:06:56.015 Test: blockdev write zeroes read split partial ...passed 00:06:56.015 Test: blockdev reset ...[2024-11-29 09:24:23.660858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:56.015 passed 00:06:56.015 Test: blockdev write read 8 blocks ...[2024-11-29 09:24:23.662748] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:56.015 passed 00:06:56.015 Test: blockdev write read size > 128k ...passed 00:06:56.015 Test: blockdev write read invalid size ...passed 00:06:56.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:56.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:56.015 Test: blockdev write read max offset ...passed 00:06:56.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:56.015 Test: blockdev writev readv 8 blocks ...passed 00:06:56.015 Test: blockdev writev readv 30 x 1block ...passed 00:06:56.015 Test: blockdev writev readv block ...passed 00:06:56.015 Test: blockdev writev readv size > 128k ...passed 00:06:56.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:56.015 Test: blockdev comparev and writev ...passed 00:06:56.015 Test: blockdev nvme passthru rw ...[2024-11-29 09:24:23.675448] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:56.015 separate metadata which is not supported yet. 
00:06:56.015 passed 00:06:56.015 Test: blockdev nvme passthru vendor specific ...passed 00:06:56.015 Test: blockdev nvme admin passthru ...[2024-11-29 09:24:23.677250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:56.015 [2024-11-29 09:24:23.677371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:56.015 passed 00:06:56.015 Test: blockdev copy ...passed 00:06:56.015 00:06:56.015 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.015 suites 7 7 n/a 0 0 00:06:56.015 tests 161 161 161 0 0 00:06:56.015 asserts 1025 1025 1025 0 n/a 00:06:56.015 00:06:56.015 Elapsed time = 1.017 seconds 00:06:56.015 0 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74827 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74827 ']' 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74827 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74827 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74827' 00:06:56.015 killing process with pid 74827 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74827 00:06:56.015 09:24:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74827 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:56.587 00:06:56.587 real 0m1.812s 00:06:56.587 user 0m4.433s 00:06:56.587 sys 0m0.304s 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:56.587 ************************************ 00:06:56.587 END TEST bdev_bounds 00:06:56.587 ************************************ 00:06:56.587 09:24:24 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:56.587 09:24:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.587 09:24:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.587 09:24:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.587 ************************************ 00:06:56.587 START TEST bdev_nbd 00:06:56.587 ************************************ 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:56.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74880 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74880 /var/tmp/spdk-nbd.sock 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74880 ']' 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.587 09:24:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:56.587 [2024-11-29 09:24:24.179884] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
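nbd_function_test now exports each of the seven bdevs through the kernel NBD driver and reads a single block back with dd. A minimal sketch of one export/verify/stop cycle, assuming the nbd kernel module is loaded and the bdev_svc app above is listening on /var/tmp/spdk-nbd.sock:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0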
00:06:56.587 [2024-11-29 09:24:24.180174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.847 [2024-11-29 09:24:24.313332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.847 [2024-11-29 09:24:24.342614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.847 [2024-11-29 09:24:24.367195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.416 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:57.676 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.677 1+0 records in 00:06:57.677 1+0 records out 00:06:57.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771209 s, 5.3 MB/s 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.677 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.937 1+0 records in 00:06:57.937 1+0 records out 00:06:57.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066503 s, 6.2 MB/s 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.937 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.198 1+0 records in 00:06:58.198 1+0 records out 00:06:58.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667883 s, 6.1 MB/s 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.198 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:58.506 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.507 1+0 records in 00:06:58.507 1+0 records out 00:06:58.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000863027 s, 4.7 MB/s 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.507 09:24:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.776 1+0 records in 00:06:58.776 1+0 records out 00:06:58.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468545 s, 8.7 MB/s 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.776 1+0 records in 00:06:58.776 1+0 records out 00:06:58.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844012 s, 4.9 MB/s 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.776 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.038 1+0 records in 00:06:59.038 1+0 records out 00:06:59.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106733 s, 3.8 MB/s 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:59.038 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.300 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd0", 00:06:59.300 "bdev_name": "Nvme0n1" 00:06:59.300 }, 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd1", 00:06:59.300 "bdev_name": "Nvme1n1p1" 00:06:59.300 }, 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd2", 00:06:59.300 "bdev_name": "Nvme1n1p2" 00:06:59.300 }, 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd3", 00:06:59.300 "bdev_name": "Nvme2n1" 00:06:59.300 }, 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd4", 00:06:59.300 "bdev_name": "Nvme2n2" 00:06:59.300 }, 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd5", 00:06:59.300 "bdev_name": "Nvme2n3" 00:06:59.300 }, 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd6", 00:06:59.300 "bdev_name": "Nvme3n1" 00:06:59.300 } 00:06:59.300 ]' 00:06:59.300 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:59.300 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:59.300 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:59.300 { 00:06:59.300 "nbd_device": "/dev/nbd0", 00:06:59.300 "bdev_name": "Nvme0n1" 00:06:59.300 }, 00:06:59.300 { 00:06:59.301 "nbd_device": "/dev/nbd1", 00:06:59.301 "bdev_name": "Nvme1n1p1" 00:06:59.301 }, 00:06:59.301 { 00:06:59.301 "nbd_device": "/dev/nbd2", 00:06:59.301 "bdev_name": "Nvme1n1p2" 00:06:59.301 }, 00:06:59.301 { 00:06:59.301 "nbd_device": "/dev/nbd3", 00:06:59.301 "bdev_name": "Nvme2n1" 00:06:59.301 }, 00:06:59.301 { 00:06:59.301 "nbd_device": "/dev/nbd4", 00:06:59.301 "bdev_name": "Nvme2n2" 00:06:59.301 }, 00:06:59.301 { 00:06:59.301 "nbd_device": "/dev/nbd5", 00:06:59.301 "bdev_name": "Nvme2n3" 00:06:59.301 }, 00:06:59.301 { 00:06:59.301 "nbd_device": "/dev/nbd6", 00:06:59.301 "bdev_name": "Nvme3n1" 00:06:59.301 } 00:06:59.301 ]' 00:06:59.301 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:06:59.301 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.301 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:59.301 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.301 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.301 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.301 09:24:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.562 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.823 09:24:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.823 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:00.084 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:00.084 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.085 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.345 09:24:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.605 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.866 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.128 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.129 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:01.129 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.129 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.129 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:01.129 /dev/nbd0 00:07:01.129 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.390 1+0 records in 00:07:01.390 1+0 records out 00:07:01.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00413707 s, 990 kB/s 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.390 09:24:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:01.390 /dev/nbd1 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.390 09:24:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.390 1+0 records in 00:07:01.390 1+0 records out 00:07:01.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100546 s, 4.1 MB/s 00:07:01.390 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:01.651 /dev/nbd10 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.651 1+0 records in 00:07:01.651 1+0 records out 00:07:01.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101342 s, 4.0 MB/s 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.651 
09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.651 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:01.911 /dev/nbd11 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.911 1+0 records in 00:07:01.911 1+0 records out 00:07:01.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113256 s, 3.6 MB/s 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.911 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:02.171 /dev/nbd12 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.171 09:24:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.171 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.172 1+0 records in 00:07:02.172 1+0 records out 00:07:02.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139094 s, 2.9 MB/s 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.172 09:24:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:02.431 /dev/nbd13 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.431 1+0 records in 00:07:02.431 1+0 records out 00:07:02.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000821801 s, 5.0 MB/s 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.431 09:24:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.432 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.432 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.432 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.432 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:02.691 /dev/nbd14 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.691 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.691 1+0 records in 00:07:02.692 1+0 records out 00:07:02.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108606 s, 3.8 MB/s 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.692 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.952 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd0", 00:07:02.952 "bdev_name": "Nvme0n1" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd1", 00:07:02.952 "bdev_name": "Nvme1n1p1" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd10", 00:07:02.952 "bdev_name": "Nvme1n1p2" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd11", 00:07:02.952 "bdev_name": "Nvme2n1" 00:07:02.952 
}, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd12", 00:07:02.952 "bdev_name": "Nvme2n2" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd13", 00:07:02.952 "bdev_name": "Nvme2n3" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd14", 00:07:02.952 "bdev_name": "Nvme3n1" 00:07:02.952 } 00:07:02.952 ]' 00:07:02.952 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd0", 00:07:02.952 "bdev_name": "Nvme0n1" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd1", 00:07:02.952 "bdev_name": "Nvme1n1p1" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd10", 00:07:02.952 "bdev_name": "Nvme1n1p2" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd11", 00:07:02.952 "bdev_name": "Nvme2n1" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd12", 00:07:02.952 "bdev_name": "Nvme2n2" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd13", 00:07:02.952 "bdev_name": "Nvme2n3" 00:07:02.952 }, 00:07:02.952 { 00:07:02.952 "nbd_device": "/dev/nbd14", 00:07:02.952 "bdev_name": "Nvme3n1" 00:07:02.952 } 00:07:02.952 ]' 00:07:02.952 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.953 /dev/nbd1 00:07:02.953 /dev/nbd10 00:07:02.953 /dev/nbd11 00:07:02.953 /dev/nbd12 00:07:02.953 /dev/nbd13 00:07:02.953 /dev/nbd14' 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.953 /dev/nbd1 00:07:02.953 /dev/nbd10 00:07:02.953 /dev/nbd11 00:07:02.953 /dev/nbd12 00:07:02.953 /dev/nbd13 00:07:02.953 /dev/nbd14' 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:02.953 256+0 records in 00:07:02.953 256+0 records out 00:07:02.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110351 s, 95.0 MB/s 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.953 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.215 256+0 records in 00:07:03.215 256+0 records out 00:07:03.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.267188 s, 3.9 MB/s 00:07:03.215 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.215 09:24:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.476 256+0 records in 00:07:03.476 256+0 records out 00:07:03.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214783 s, 4.9 MB/s 00:07:03.476 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.476 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:03.735 256+0 records in 00:07:03.736 256+0 records out 00:07:03.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228807 s, 4.6 MB/s 00:07:03.736 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.736 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:03.996 256+0 records in 00:07:03.996 256+0 records out 00:07:03.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204096 s, 5.1 MB/s 00:07:03.996 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.996 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:04.256 256+0 records in 00:07:04.256 256+0 records out 00:07:04.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200509 s, 5.2 MB/s 00:07:04.256 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.256 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:04.256 256+0 records in 00:07:04.256 256+0 records out 00:07:04.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222358 s, 4.7 MB/s 00:07:04.256 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.256 09:24:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:04.515 256+0 records in 00:07:04.515 256+0 records out 00:07:04.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150128 s, 7.0 MB/s 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = 
write ']' 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.515 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.516 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.516 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.516 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.775 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.034 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.294 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.295 09:24:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.554 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.814 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.073 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.074 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:07:06.074 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.074 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.074 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.074 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.074 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:06.074 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:06.334 09:24:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:06.334 malloc_lvol_verify 00:07:06.334 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:06.594 e10a904b-6c63-4a06-994a-aae676e9efd8 00:07:06.594 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:06.855 090a009d-40dc-445c-9919-a244fd6cb115 00:07:06.855 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:07.115 /dev/nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:07.115 mke2fs 1.47.0 (5-Feb-2023) 00:07:07.115 Discarding device blocks: 0/4096 done 00:07:07.115 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:07.115 00:07:07.115 Allocating group tables: 0/1 done 00:07:07.115 Writing inode tables: 0/1 done 00:07:07.115 Creating journal (1024 blocks): done 00:07:07.115 Writing superblocks and filesystem accounting information: 0/1 done 00:07:07.115 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.115 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74880 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74880 ']' 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74880 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74880 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.375 killing process with pid 74880 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74880' 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74880 00:07:07.375 09:24:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74880 00:07:07.375 09:24:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:07.375 00:07:07.375 real 0m10.963s 00:07:07.375 user 0m15.166s 00:07:07.375 sys 0m3.805s 00:07:07.375 ************************************ 00:07:07.375 END TEST bdev_nbd 00:07:07.375 09:24:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.375 09:24:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.375 ************************************ 00:07:07.632 09:24:35 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:07.632 09:24:35 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:07.633 skipping fio tests on NVMe due to multi-ns failures. 00:07:07.633 09:24:35 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:07.633 09:24:35 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
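Note on the bdev_nbd phase that just ended: nearly all of the xtrace above is two polling helpers running over and over. Below is a condensed bash sketch of that pattern, simplified from the waitfornbd helper in test/common/autotest_common.sh and waitfornbd_exit in test/bdev/nbd_common.sh as they appear in this trace; the 0.1 s sleep and the /tmp/nbdtest scratch path are stand-ins (the real helpers use the repo's test/bdev/nbdtest file and carry extra bookkeeping), so treat this as a sketch, not the canonical implementation.

# Sketch: wait for an NBD device to become usable after nbd_start_disk.
waitfornbd() {
    local nbd_name=$1 i
    # First wait for the kernel to publish the device in /proc/partitions
    # (the grep -q -w calls visible in the trace above).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed retry interval; not shown in the xtrace
    done
    # Then prove the device is readable: one direct-I/O 4 KiB read must
    # complete and produce a 4096-byte file (the dd/stat pair in the trace).
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null &&
           [[ "$(stat -c %s /tmp/nbdtest)" == 4096 ]]; then
            rm -f /tmp/nbdtest
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Sketch: wait for an NBD device to disappear after nbd_stop_disk.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}

Usage mirrors the start/stop sequences traced above: rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 followed by waitfornbd nbd0, and nbd_stop_disk /dev/nbd0 followed by waitfornbd_exit nbd0.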
00:07:07.633 09:24:35 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:07.633 09:24:35 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.633 09:24:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:07.633 09:24:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.633 09:24:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.633 ************************************ 00:07:07.633 START TEST bdev_verify 00:07:07.633 ************************************ 00:07:07.633 09:24:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.633 [2024-11-29 09:24:35.184960] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:07.633 [2024-11-29 09:24:35.185076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75285 ] 00:07:07.633 [2024-11-29 09:24:35.319268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.633 [2024-11-29 09:24:35.349379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.889 [2024-11-29 09:24:35.376581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.889 [2024-11-29 09:24:35.376627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.146 Running I/O for 5 seconds... 
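Before the numbers start, the shape of the run: the bdev_verify test drives bdevperf with the invocation captured in the xtrace above, reflowed here for readability. The flag glosses are inferred from this run's output rather than quoted from bdevperf's help text, so read them as one interpretation of the command, not its manual.

# Invocation from the trace above (reflowed; glosses are the editor's reading):
#   --json    : bdev configuration that attaches the NVMe controllers under test
#   -q 128    : queue depth per job
#   -o 4096   : 4 KiB I/O size
#   -w verify : write, read back, and compare each block
#   -t 5      : run for five seconds ("Running I/O for 5 seconds..." above)
#   -C        : appears to run one job per core against every bdev, matching the
#               Core Mask 0x1 and 0x2 rows per device in the results below
#   -m 0x3    : core mask; reactors started on cores 0 and 1 above
#   ''        : empty positional argument passed through by the test harness
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''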
00:07:10.453 25536.00 IOPS, 99.75 MiB/s [2024-11-29T09:24:39.140Z] 23904.00 IOPS, 93.38 MiB/s [2024-11-29T09:24:40.073Z] 24490.67 IOPS, 95.67 MiB/s [2024-11-29T09:24:41.011Z] 24544.00 IOPS, 95.88 MiB/s [2024-11-29T09:24:41.011Z] 24550.40 IOPS, 95.90 MiB/s 00:07:13.285 Latency(us) 00:07:13.285 [2024-11-29T09:24:41.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.285 Verification LBA range: start 0x0 length 0xbd0bd 00:07:13.285 Nvme0n1 : 5.06 1873.09 7.32 0.00 0.00 68177.31 14115.45 74610.22 00:07:13.285 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.285 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:13.285 Nvme0n1 : 5.07 1577.03 6.16 0.00 0.00 80763.20 8620.50 82272.89 00:07:13.285 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.285 Verification LBA range: start 0x0 length 0x4ff80 00:07:13.285 Nvme1n1p1 : 5.06 1872.64 7.32 0.00 0.00 68075.64 15627.82 67350.84 00:07:13.286 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:13.286 Nvme1n1p1 : 5.09 1585.25 6.19 0.00 0.00 80361.34 11292.36 71383.83 00:07:13.286 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x0 length 0x4ff7f 00:07:13.286 Nvme1n1p2 : 5.06 1872.21 7.31 0.00 0.00 67983.79 14720.39 61704.66 00:07:13.286 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:13.286 Nvme1n1p2 : 5.09 1584.80 6.19 0.00 0.00 80233.92 11292.36 71787.13 00:07:13.286 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x0 length 0x80000 00:07:13.286 Nvme2n1 : 5.06 1871.83 7.31 0.00 0.00 67881.34 14518.74 59284.87 00:07:13.286 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x80000 length 0x80000 00:07:13.286 Nvme2n1 : 5.09 1584.38 6.19 0.00 0.00 80066.81 11645.24 68560.74 00:07:13.286 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x0 length 0x80000 00:07:13.286 Nvme2n2 : 5.06 1871.42 7.31 0.00 0.00 67766.05 13812.97 59688.17 00:07:13.286 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x80000 length 0x80000 00:07:13.286 Nvme2n2 : 5.09 1583.96 6.19 0.00 0.00 79883.78 11998.13 72190.42 00:07:13.286 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x0 length 0x80000 00:07:13.286 Nvme2n3 : 5.06 1871.02 7.31 0.00 0.00 67656.78 13409.67 62511.26 00:07:13.286 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x80000 length 0x80000 00:07:13.286 Nvme2n3 : 5.09 1583.54 6.19 0.00 0.00 79734.94 12351.02 75013.51 00:07:13.286 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x0 length 0x20000 00:07:13.286 Nvme3n1 : 5.08 1878.74 7.34 0.00 0.00 67235.61 7612.26 64124.46 00:07:13.286 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.286 Verification LBA range: start 0x20000 length 0x20000 00:07:13.286 Nvme3n1 
: 5.09 1583.13 6.18 0.00 0.00 79668.34 10536.17 73400.32 00:07:13.286 [2024-11-29T09:24:41.012Z] =================================================================================================================== 00:07:13.286 [2024-11-29T09:24:41.012Z] Total : 24193.03 94.50 0.00 0.00 73463.58 7612.26 82272.89 00:07:13.858 00:07:13.858 real 0m6.461s 00:07:13.858 user 0m12.182s 00:07:13.858 sys 0m0.238s 00:07:13.858 ************************************ 00:07:13.858 END TEST bdev_verify 00:07:13.858 ************************************ 00:07:13.858 09:24:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.858 09:24:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.118 09:24:41 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:14.118 09:24:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:14.118 09:24:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.118 09:24:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.118 ************************************ 00:07:14.118 START TEST bdev_verify_big_io 00:07:14.118 ************************************ 00:07:14.118 09:24:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:14.118 [2024-11-29 09:24:41.694562] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:14.118 [2024-11-29 09:24:41.694679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75378 ] 00:07:14.118 [2024-11-29 09:24:41.826794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:14.379 [2024-11-29 09:24:41.856042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.379 [2024-11-29 09:24:41.877193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.379 [2024-11-29 09:24:41.877269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.639 Running I/O for 5 seconds... 
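Before the big-I/O results arrive, a quick consistency check on the verify totals above: at a fixed 4096-byte I/O size, MiB/s follows directly from IOPS, e.g. for the Total row:

    # 24193.03 IOPS x 4096 B = 94.50 MiB/s, matching the Total row above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 24193.03 * 4096 / (1024 * 1024) }'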
00:07:19.069 639.00 IOPS, 39.94 MiB/s [2024-11-29T09:24:48.171Z] 2089.50 IOPS, 130.59 MiB/s [2024-11-29T09:24:48.737Z] 2687.00 IOPS, 167.94 MiB/s [2024-11-29T09:24:48.737Z] 2936.50 IOPS, 183.53 MiB/s 00:07:21.011 Latency(us) 00:07:21.011 [2024-11-29T09:24:48.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.011 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x0 length 0xbd0b 00:07:21.011 Nvme0n1 : 5.75 120.95 7.56 0.00 0.00 1002559.27 13006.38 1574477.19 00:07:21.011 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:21.011 Nvme0n1 : 6.02 93.00 5.81 0.00 0.00 1266448.09 13812.97 1568024.42 00:07:21.011 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x0 length 0x4ff8 00:07:21.011 Nvme1n1p1 : 5.83 126.99 7.94 0.00 0.00 931320.24 87515.77 1367988.38 00:07:21.011 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:21.011 Nvme1n1p1 : 5.83 143.08 8.94 0.00 0.00 824307.96 72190.42 974369.08 00:07:21.011 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x0 length 0x4ff7 00:07:21.011 Nvme1n1p2 : 5.83 131.73 8.23 0.00 0.00 880520.27 70980.53 1174405.12 00:07:21.011 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:21.011 Nvme1n1p2 : 5.94 146.22 9.14 0.00 0.00 777178.18 67754.14 803370.54 00:07:21.011 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x0 length 0x8000 00:07:21.011 Nvme2n1 : 5.94 132.86 8.30 0.00 0.00 837813.45 72593.72 980821.86 00:07:21.011 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x8000 length 0x8000 00:07:21.011 Nvme2n1 : 5.94 150.72 9.42 0.00 0.00 736577.83 68157.44 784012.21 00:07:21.011 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x0 length 0x8000 00:07:21.011 Nvme2n2 : 6.04 134.88 8.43 0.00 0.00 788843.24 33473.77 1361535.61 00:07:21.011 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x8000 length 0x8000 00:07:21.011 Nvme2n2 : 6.04 158.97 9.94 0.00 0.00 685269.25 14417.92 803370.54 00:07:21.011 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x0 length 0x8000 00:07:21.011 Nvme2n3 : 6.17 160.21 10.01 0.00 0.00 640275.85 12703.90 1367988.38 00:07:21.011 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x8000 length 0x8000 00:07:21.011 Nvme2n3 : 6.10 164.27 10.27 0.00 0.00 647233.05 22282.24 819502.47 00:07:21.011 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x0 length 0x2000 00:07:21.011 Nvme3n1 : 6.32 258.02 16.13 0.00 0.00 382886.75 370.22 1400252.26 00:07:21.011 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.011 Verification LBA range: start 0x2000 length 0x2000 00:07:21.011 Nvme3n1 : 6.10 178.21 11.14 0.00 0.00 
586574.55 920.02 832408.02 00:07:21.011 [2024-11-29T09:24:48.737Z] =================================================================================================================== 00:07:21.011 [2024-11-29T09:24:48.737Z] Total : 2100.11 131.26 0.00 0.00 736081.48 370.22 1574477.19 00:07:21.945 00:07:21.945 real 0m8.016s 00:07:21.945 user 0m15.294s 00:07:21.945 sys 0m0.232s 00:07:21.945 09:24:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.945 09:24:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:21.945 ************************************ 00:07:21.945 END TEST bdev_verify_big_io 00:07:21.945 ************************************ 00:07:22.203 09:24:49 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.203 09:24:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:22.203 09:24:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.203 09:24:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.203 ************************************ 00:07:22.203 START TEST bdev_write_zeroes 00:07:22.203 ************************************ 00:07:22.203 09:24:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.203 [2024-11-29 09:24:49.754314] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:22.203 [2024-11-29 09:24:49.754436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75487 ] 00:07:22.203 [2024-11-29 09:24:49.892291] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:22.203 [2024-11-29 09:24:49.919047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.461 [2024-11-29 09:24:49.938571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.719 Running I/O for 1 seconds... 
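Every phase in this log (bdev_verify, bdev_verify_big_io, the write_zeroes run just started) executes inside the same run_test wrapper: banner, timed command, banner. A simplified sketch of its shape, inferred only from the START/END banners and real/user/sys lines visible here (the real helper in autotest_common.sh also handles xtrace and argument checks):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"            # produces the real/user/sys lines in the log
        local rc=$?
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }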
00:07:23.912 69888.00 IOPS, 273.00 MiB/s 00:07:23.912 Latency(us) 00:07:23.912 [2024-11-29T09:24:51.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.912 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.912 Nvme0n1 : 1.02 9937.05 38.82 0.00 0.00 12852.56 11443.59 24298.73 00:07:23.913 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.913 Nvme1n1p1 : 1.03 9924.88 38.77 0.00 0.00 12847.11 11393.18 23996.26 00:07:23.913 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.913 Nvme1n1p2 : 1.03 9912.67 38.72 0.00 0.00 12827.49 11191.53 23290.49 00:07:23.913 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.913 Nvme2n1 : 1.03 9901.30 38.68 0.00 0.00 12812.99 11342.77 22584.71 00:07:23.913 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.913 Nvme2n2 : 1.03 9890.11 38.63 0.00 0.00 12795.43 11393.18 22181.42 00:07:23.913 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.913 Nvme2n3 : 1.03 9878.92 38.59 0.00 0.00 12773.05 10939.47 22786.36 00:07:23.913 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.913 Nvme3n1 : 1.03 9867.64 38.55 0.00 0.00 12769.81 10132.87 24601.21 00:07:23.913 [2024-11-29T09:24:51.639Z] =================================================================================================================== 00:07:23.913 [2024-11-29T09:24:51.639Z] Total : 69312.57 270.75 0.00 0.00 12811.21 10132.87 24601.21 00:07:23.913 00:07:23.913 real 0m1.887s 00:07:23.913 user 0m1.586s 00:07:23.913 sys 0m0.191s 00:07:23.913 09:24:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.913 09:24:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:23.913 ************************************ 00:07:23.913 END TEST bdev_write_zeroes 00:07:23.913 ************************************ 00:07:23.913 09:24:51 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.913 09:24:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.913 09:24:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.913 09:24:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.913 ************************************ 00:07:23.913 START TEST bdev_json_nonenclosed 00:07:23.913 ************************************ 00:07:23.913 09:24:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.175 [2024-11-29 09:24:51.687255] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
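bdev_json_nonenclosed, starting above, is a negative test: bdevperf gets a config whose top level is not a JSON object and must exit non-zero. The actual nonenclosed.json payload is not shown in this log, so the one below is an assumption; the expected failure is the "not enclosed in {}" error that follows:

    # Hypothetical payload; only its "not an enclosing object" shape matters.
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo 'unexpected success' >&2
        exit 1
    fi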
00:07:24.175 [2024-11-29 09:24:51.687373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75529 ] 00:07:24.175 [2024-11-29 09:24:51.819895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.175 [2024-11-29 09:24:51.851580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.175 [2024-11-29 09:24:51.875829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.175 [2024-11-29 09:24:51.875912] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:24.175 [2024-11-29 09:24:51.875932] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:24.175 [2024-11-29 09:24:51.875941] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.436 00:07:24.436 real 0m0.325s 00:07:24.436 user 0m0.119s 00:07:24.436 sys 0m0.101s 00:07:24.436 09:24:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.436 ************************************ 00:07:24.436 END TEST bdev_json_nonenclosed 00:07:24.436 ************************************ 00:07:24.436 09:24:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:24.436 09:24:51 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.436 09:24:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:24.436 09:24:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.436 09:24:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.436 ************************************ 00:07:24.436 START TEST bdev_json_nonarray 00:07:24.436 ************************************ 00:07:24.436 09:24:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.436 [2024-11-29 09:24:52.069645] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:24.436 [2024-11-29 09:24:52.069757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75549 ] 00:07:24.694 [2024-11-29 09:24:52.201545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.694 [2024-11-29 09:24:52.229810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.694 [2024-11-29 09:24:52.253657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.694 [2024-11-29 09:24:52.253762] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
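bdev_json_nonarray is the companion negative case behind the error just logged: the top level is an object, but "subsystems" is not an array. The payload below is likewise an assumption; the same non-zero exit is expected:

    # Hypothetical payload: "subsystems" as an object instead of an array.
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' \
        && { echo 'unexpected success' >&2; exit 1; }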
00:07:24.694 [2024-11-29 09:24:52.253780] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:24.694 [2024-11-29 09:24:52.253791] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.694 00:07:24.694 real 0m0.318s 00:07:24.694 user 0m0.110s 00:07:24.694 sys 0m0.106s 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.694 ************************************ 00:07:24.694 END TEST bdev_json_nonarray 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:24.694 ************************************ 00:07:24.694 09:24:52 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:24.694 09:24:52 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:24.694 09:24:52 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:24.694 09:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.694 09:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.694 09:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.694 ************************************ 00:07:24.694 START TEST bdev_gpt_uuid 00:07:24.694 ************************************ 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75569 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75569 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75569 ']' 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.694 09:24:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:24.953 [2024-11-29 09:24:52.452451] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:24.953 [2024-11-29 09:24:52.452571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75569 ] 00:07:24.953 [2024-11-29 09:24:52.585669] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:24.953 [2024-11-29 09:24:52.616796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.953 [2024-11-29 09:24:52.640227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.889 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.889 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:25.889 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:25.889 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.889 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.149 Some configs were skipped because the RPC state that can call them passed over. 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.149 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:26.150 { 00:07:26.150 "name": "Nvme1n1p1", 00:07:26.150 "aliases": [ 00:07:26.150 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:26.150 ], 00:07:26.150 "product_name": "GPT Disk", 00:07:26.150 "block_size": 4096, 00:07:26.150 "num_blocks": 655104, 00:07:26.150 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:26.150 "assigned_rate_limits": { 00:07:26.150 "rw_ios_per_sec": 0, 00:07:26.150 "rw_mbytes_per_sec": 0, 00:07:26.150 "r_mbytes_per_sec": 0, 00:07:26.150 "w_mbytes_per_sec": 0 00:07:26.150 }, 00:07:26.150 "claimed": false, 00:07:26.150 "zoned": false, 00:07:26.150 "supported_io_types": { 00:07:26.150 "read": true, 00:07:26.150 "write": true, 00:07:26.150 "unmap": true, 00:07:26.150 "flush": true, 00:07:26.150 "reset": true, 00:07:26.150 "nvme_admin": false, 00:07:26.150 "nvme_io": false, 00:07:26.150 "nvme_io_md": false, 00:07:26.150 "write_zeroes": true, 00:07:26.150 "zcopy": false, 00:07:26.150 "get_zone_info": false, 00:07:26.150 "zone_management": false, 00:07:26.150 "zone_append": false, 00:07:26.150 "compare": true, 00:07:26.150 "compare_and_write": false, 00:07:26.150 "abort": true, 00:07:26.150 "seek_hole": false, 00:07:26.150 "seek_data": false, 00:07:26.150 "copy": true, 00:07:26.150 "nvme_iov_md": false 00:07:26.150 }, 00:07:26.150 "driver_specific": { 00:07:26.150 "gpt": { 00:07:26.150 "base_bdev": "Nvme1n1", 00:07:26.150 "offset_blocks": 256, 00:07:26.150 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:26.150 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:07:26.150 "partition_name": "SPDK_TEST_first" 00:07:26.150 } 00:07:26.150 } 00:07:26.150 } 00:07:26.150 ]' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:26.150 { 00:07:26.150 "name": "Nvme1n1p2", 00:07:26.150 "aliases": [ 00:07:26.150 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:26.150 ], 00:07:26.150 "product_name": "GPT Disk", 00:07:26.150 "block_size": 4096, 00:07:26.150 "num_blocks": 655103, 00:07:26.150 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:26.150 "assigned_rate_limits": { 00:07:26.150 "rw_ios_per_sec": 0, 00:07:26.150 "rw_mbytes_per_sec": 0, 00:07:26.150 "r_mbytes_per_sec": 0, 00:07:26.150 "w_mbytes_per_sec": 0 00:07:26.150 }, 00:07:26.150 "claimed": false, 00:07:26.150 "zoned": false, 00:07:26.150 "supported_io_types": { 00:07:26.150 "read": true, 00:07:26.150 "write": true, 00:07:26.150 "unmap": true, 00:07:26.150 "flush": true, 00:07:26.150 "reset": true, 00:07:26.150 "nvme_admin": false, 00:07:26.150 "nvme_io": false, 00:07:26.150 "nvme_io_md": false, 00:07:26.150 "write_zeroes": true, 00:07:26.150 "zcopy": false, 00:07:26.150 "get_zone_info": false, 00:07:26.150 "zone_management": false, 00:07:26.150 "zone_append": false, 00:07:26.150 "compare": true, 00:07:26.150 "compare_and_write": false, 00:07:26.150 "abort": true, 00:07:26.150 "seek_hole": false, 00:07:26.150 "seek_data": false, 00:07:26.150 "copy": true, 00:07:26.150 "nvme_iov_md": false 00:07:26.150 }, 00:07:26.150 "driver_specific": { 00:07:26.150 "gpt": { 00:07:26.150 "base_bdev": "Nvme1n1", 00:07:26.150 "offset_blocks": 655360, 00:07:26.150 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:26.150 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:26.150 "partition_name": "SPDK_TEST_second" 00:07:26.150 } 00:07:26.150 } 00:07:26.150 } 00:07:26.150 ]' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 75569 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75569 ']' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75569 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75569 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.150 killing process with pid 75569 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75569' 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75569 00:07:26.150 09:24:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75569 00:07:26.718 00:07:26.718 real 0m1.841s 00:07:26.718 user 0m1.987s 00:07:26.718 sys 0m0.362s 00:07:26.718 09:24:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.718 ************************************ 00:07:26.718 END TEST bdev_gpt_uuid 00:07:26.718 ************************************ 00:07:26.718 09:24:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:26.718 09:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:26.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.233 Waiting for block devices as requested 00:07:27.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.233 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.503 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:32.838 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:07:32.838 09:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:32.838 09:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:32.838 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:32.838 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:32.838 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:32.838 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:32.838 09:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:32.838 00:07:32.838 real 0m49.856s 00:07:32.838 user 1m3.069s 00:07:32.838 sys 0m8.127s 00:07:32.838 09:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.838 ************************************ 00:07:32.838 END TEST blockdev_nvme_gpt 00:07:32.838 ************************************ 00:07:32.838 09:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.838 09:25:00 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:32.838 09:25:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.838 09:25:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.838 09:25:00 -- common/autotest_common.sh@10 -- # set +x 00:07:32.838 ************************************ 00:07:32.838 START TEST nvme 00:07:32.838 ************************************ 00:07:32.838 09:25:00 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:33.097 * Looking for test storage... 00:07:33.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.097 09:25:00 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.097 09:25:00 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.097 09:25:00 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.097 09:25:00 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.097 09:25:00 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.097 09:25:00 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.097 09:25:00 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.097 09:25:00 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:33.097 09:25:00 nvme -- scripts/common.sh@345 -- # : 1 00:07:33.097 09:25:00 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.097 09:25:00 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.097 09:25:00 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:33.097 09:25:00 nvme -- scripts/common.sh@353 -- # local d=1 00:07:33.097 09:25:00 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.097 09:25:00 nvme -- scripts/common.sh@355 -- # echo 1 00:07:33.097 09:25:00 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.097 09:25:00 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@353 -- # local d=2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.097 09:25:00 nvme -- scripts/common.sh@355 -- # echo 2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.097 09:25:00 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.097 09:25:00 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.097 09:25:00 nvme -- scripts/common.sh@368 -- # return 0 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.097 --rc genhtml_branch_coverage=1 00:07:33.097 --rc genhtml_function_coverage=1 00:07:33.097 --rc genhtml_legend=1 00:07:33.097 --rc geninfo_all_blocks=1 00:07:33.097 --rc geninfo_unexecuted_blocks=1 00:07:33.097 00:07:33.097 ' 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.097 --rc genhtml_branch_coverage=1 00:07:33.097 --rc genhtml_function_coverage=1 00:07:33.097 --rc genhtml_legend=1 00:07:33.097 --rc geninfo_all_blocks=1 00:07:33.097 --rc geninfo_unexecuted_blocks=1 00:07:33.097 00:07:33.097 ' 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.097 --rc genhtml_branch_coverage=1 00:07:33.097 --rc genhtml_function_coverage=1 00:07:33.097 --rc genhtml_legend=1 00:07:33.097 --rc geninfo_all_blocks=1 00:07:33.097 --rc geninfo_unexecuted_blocks=1 00:07:33.097 00:07:33.097 ' 00:07:33.097 09:25:00 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.097 --rc genhtml_branch_coverage=1 00:07:33.097 --rc genhtml_function_coverage=1 00:07:33.097 --rc genhtml_legend=1 00:07:33.097 --rc geninfo_all_blocks=1 00:07:33.097 --rc geninfo_unexecuted_blocks=1 00:07:33.097 00:07:33.097 ' 00:07:33.097 09:25:00 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:33.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.239 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.239 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.239 09:25:01 nvme -- nvme/nvme.sh@79 -- # uname 00:07:34.239 09:25:01 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:34.239 09:25:01 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:34.239 09:25:01 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:34.239 09:25:01 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:34.239 Waiting for stub to ready for secondary processes... 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1075 -- # stubpid=76193 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76193 ]] 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:34.239 09:25:01 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:34.239 [2024-11-29 09:25:01.838913] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:34.239 [2024-11-29 09:25:01.839026] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:35.181 [2024-11-29 09:25:02.766718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:35.181 [2024-11-29 09:25:02.798483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.181 09:25:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:35.181 09:25:02 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76193 ]] 00:07:35.181 09:25:02 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:35.181 [2024-11-29 09:25:02.812869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.181 [2024-11-29 09:25:02.813156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.181 [2024-11-29 09:25:02.813174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.181 [2024-11-29 09:25:02.832327] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:35.181 [2024-11-29 09:25:02.832429] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:35.181 [2024-11-29 09:25:02.847353] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:35.181 [2024-11-29 09:25:02.847496] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:35.181 [2024-11-29 09:25:02.848233] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:35.181 [2024-11-29 09:25:02.848371] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:35.181 [2024-11-29 09:25:02.848414] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:35.181 [2024-11-29 09:25:02.849849] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:35.181 [2024-11-29 09:25:02.849992] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:35.181 [2024-11-29 09:25:02.850041] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:35.181 [2024-11-29 09:25:02.851200] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 
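The stub handshake traced above condenses to: start the stub app with its EAL options, then poll until the /var/run/spdk_stub0 marker exists while the stub process is still alive. A minimal sketch using only what the trace shows (the -s 4096 -i 0 -m 0xE options, the marker path, the /proc/<pid> liveness check, the 1s sleep):

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo 'Waiting for stub to ready for secondary processes...'
    # Wait while the marker is absent and the stub is still running.
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
        sleep 1s
    done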
00:07:35.181 [2024-11-29 09:25:02.851343] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:35.181 [2024-11-29 09:25:02.851389] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:35.181 [2024-11-29 09:25:02.851426] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:35.181 [2024-11-29 09:25:02.851511] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:36.124 done. 00:07:36.124 09:25:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:36.124 09:25:03 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:36.124 09:25:03 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:36.124 09:25:03 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:36.124 09:25:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.124 09:25:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.124 ************************************ 00:07:36.124 START TEST nvme_reset 00:07:36.124 ************************************ 00:07:36.124 09:25:03 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:36.386 Initializing NVMe Controllers 00:07:36.386 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:36.386 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:36.386 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:36.386 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:36.386 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:36.386 ************************************ 00:07:36.386 END TEST nvme_reset 00:07:36.386 ************************************ 00:07:36.386 00:07:36.386 real 0m0.228s 00:07:36.386 user 0m0.078s 00:07:36.386 sys 0m0.093s 00:07:36.386 09:25:04 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.386 09:25:04 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:36.386 09:25:04 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:36.386 09:25:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.386 09:25:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.386 09:25:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.650 ************************************ 00:07:36.650 START TEST nvme_identify 00:07:36.650 ************************************ 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:36.650 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:36.650 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:36.650 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:36.650 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1499 -- 
# /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:36.650 09:25:04 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:36.650 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:36.650 [2024-11-29 09:25:04.351551] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76226 terminated unexpected 00:07:36.650 ===================================================== 00:07:36.650 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:36.650 ===================================================== 00:07:36.650 Controller Capabilities/Features 00:07:36.650 ================================ 00:07:36.650 Vendor ID: 1b36 00:07:36.650 Subsystem Vendor ID: 1af4 00:07:36.650 Serial Number: 12340 00:07:36.650 Model Number: QEMU NVMe Ctrl 00:07:36.650 Firmware Version: 8.0.0 00:07:36.650 Recommended Arb Burst: 6 00:07:36.650 IEEE OUI Identifier: 00 54 52 00:07:36.650 Multi-path I/O 00:07:36.650 May have multiple subsystem ports: No 00:07:36.650 May have multiple controllers: No 00:07:36.650 Associated with SR-IOV VF: No 00:07:36.650 Max Data Transfer Size: 524288 00:07:36.650 Max Number of Namespaces: 256 00:07:36.650 Max Number of I/O Queues: 64 00:07:36.650 NVMe Specification Version (VS): 1.4 00:07:36.650 NVMe Specification Version (Identify): 1.4 00:07:36.650 Maximum Queue Entries: 2048 00:07:36.650 Contiguous Queues Required: Yes 00:07:36.650 Arbitration Mechanisms Supported 00:07:36.650 Weighted Round Robin: Not Supported 00:07:36.650 Vendor Specific: Not Supported 00:07:36.650 Reset Timeout: 7500 ms 00:07:36.650 Doorbell Stride: 4 bytes 00:07:36.650 NVM Subsystem Reset: Not Supported 00:07:36.650 Command Sets Supported 00:07:36.650 NVM Command Set: Supported 00:07:36.650 Boot Partition: Not Supported 00:07:36.650 Memory Page Size Minimum: 4096 bytes 00:07:36.650 Memory Page Size Maximum: 65536 bytes 00:07:36.650 Persistent Memory Region: Not Supported 00:07:36.650 Optional Asynchronous Events Supported 00:07:36.650 Namespace Attribute Notices: Supported 00:07:36.650 Firmware Activation Notices: Not Supported 00:07:36.650 ANA Change Notices: Not Supported 00:07:36.650 PLE Aggregate Log Change Notices: Not Supported 00:07:36.650 LBA Status Info Alert Notices: Not Supported 00:07:36.650 EGE Aggregate Log Change Notices: Not Supported 00:07:36.650 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.650 Zone Descriptor Change Notices: Not Supported 00:07:36.650 Discovery Log Change Notices: Not Supported 00:07:36.650 Controller Attributes 00:07:36.650 128-bit Host Identifier: Not Supported 00:07:36.650 Non-Operational Permissive Mode: Not Supported 00:07:36.650 NVM Sets: Not Supported 00:07:36.650 Read Recovery Levels: Not Supported 00:07:36.650 Endurance Groups: Not Supported 00:07:36.650 Predictable Latency Mode: Not Supported 00:07:36.650 Traffic Based Keep ALive: Not Supported 00:07:36.650 Namespace Granularity: Not Supported 00:07:36.650 SQ Associations: Not Supported 00:07:36.650 UUID List: Not Supported 00:07:36.650 Multi-Domain Subsystem: Not Supported 00:07:36.650 Fixed Capacity Management: Not Supported 00:07:36.650 Variable Capacity Management: Not Supported 00:07:36.650 Delete Endurance Group: Not Supported 00:07:36.650 Delete NVM Set: Not Supported 00:07:36.650 Extended LBA Formats Supported: Supported 00:07:36.650 
Flexible Data Placement Supported: Not Supported 00:07:36.650 00:07:36.650 Controller Memory Buffer Support 00:07:36.650 ================================ 00:07:36.650 Supported: No 00:07:36.650 00:07:36.650 Persistent Memory Region Support 00:07:36.650 ================================ 00:07:36.650 Supported: No 00:07:36.650 00:07:36.650 Admin Command Set Attributes 00:07:36.650 ============================ 00:07:36.650 Security Send/Receive: Not Supported 00:07:36.650 Format NVM: Supported 00:07:36.650 Firmware Activate/Download: Not Supported 00:07:36.650 Namespace Management: Supported 00:07:36.650 Device Self-Test: Not Supported 00:07:36.650 Directives: Supported 00:07:36.650 NVMe-MI: Not Supported 00:07:36.650 Virtualization Management: Not Supported 00:07:36.650 Doorbell Buffer Config: Supported 00:07:36.650 Get LBA Status Capability: Not Supported 00:07:36.650 Command & Feature Lockdown Capability: Not Supported 00:07:36.650 Abort Command Limit: 4 00:07:36.650 Async Event Request Limit: 4 00:07:36.650 Number of Firmware Slots: N/A 00:07:36.650 Firmware Slot 1 Read-Only: N/A 00:07:36.650 Firmware Activation Without Reset: N/A 00:07:36.650 Multiple Update Detection Support: N/A 00:07:36.650 Firmware Update Granularity: No Information Provided 00:07:36.650 Per-Namespace SMART Log: Yes 00:07:36.650 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.650 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:36.650 Command Effects Log Page: Supported 00:07:36.650 Get Log Page Extended Data: Supported 00:07:36.650 Telemetry Log Pages: Not Supported 00:07:36.650 Persistent Event Log Pages: Not Supported 00:07:36.650 Supported Log Pages Log Page: May Support 00:07:36.650 Commands Supported & Effects Log Page: Not Supported 00:07:36.650 Feature Identifiers & Effects Log Page:May Support 00:07:36.650 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.650 Data Area 4 for Telemetry Log: Not Supported 00:07:36.650 Error Log Page Entries Supported: 1 00:07:36.650 Keep Alive: Not Supported 00:07:36.650 00:07:36.650 NVM Command Set Attributes 00:07:36.650 ========================== 00:07:36.650 Submission Queue Entry Size 00:07:36.650 Max: 64 00:07:36.650 Min: 64 00:07:36.650 Completion Queue Entry Size 00:07:36.650 Max: 16 00:07:36.650 Min: 16 00:07:36.650 Number of Namespaces: 256 00:07:36.650 Compare Command: Supported 00:07:36.650 Write Uncorrectable Command: Not Supported 00:07:36.650 Dataset Management Command: Supported 00:07:36.650 Write Zeroes Command: Supported 00:07:36.650 Set Features Save Field: Supported 00:07:36.650 Reservations: Not Supported 00:07:36.650 Timestamp: Supported 00:07:36.650 Copy: Supported 00:07:36.650 Volatile Write Cache: Present 00:07:36.650 Atomic Write Unit (Normal): 1 00:07:36.650 Atomic Write Unit (PFail): 1 00:07:36.650 Atomic Compare & Write Unit: 1 00:07:36.650 Fused Compare & Write: Not Supported 00:07:36.650 Scatter-Gather List 00:07:36.650 SGL Command Set: Supported 00:07:36.651 SGL Keyed: Not Supported 00:07:36.651 SGL Bit Bucket Descriptor: Not Supported 00:07:36.651 SGL Metadata Pointer: Not Supported 00:07:36.651 Oversized SGL: Not Supported 00:07:36.651 SGL Metadata Address: Not Supported 00:07:36.651 SGL Offset: Not Supported 00:07:36.651 Transport SGL Data Block: Not Supported 00:07:36.651 Replay Protected Memory Block: Not Supported 00:07:36.651 00:07:36.651 Firmware Slot Information 00:07:36.651 ========================= 00:07:36.651 Active slot: 1 00:07:36.651 Slot 1 Firmware Revision: 1.0 00:07:36.651 00:07:36.651 00:07:36.651 Commands 
Supported and Effects 00:07:36.651 ============================== 00:07:36.651 Admin Commands 00:07:36.651 -------------- 00:07:36.651 Delete I/O Submission Queue (00h): Supported 00:07:36.651 Create I/O Submission Queue (01h): Supported 00:07:36.651 Get Log Page (02h): Supported 00:07:36.651 Delete I/O Completion Queue (04h): Supported 00:07:36.651 Create I/O Completion Queue (05h): Supported 00:07:36.651 Identify (06h): Supported 00:07:36.651 Abort (08h): Supported 00:07:36.651 Set Features (09h): Supported 00:07:36.651 Get Features (0Ah): Supported 00:07:36.651 Asynchronous Event Request (0Ch): Supported 00:07:36.651 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.651 Directive Send (19h): Supported 00:07:36.651 Directive Receive (1Ah): Supported 00:07:36.651 Virtualization Management (1Ch): Supported 00:07:36.651 Doorbell Buffer Config (7Ch): Supported 00:07:36.651 Format NVM (80h): Supported LBA-Change 00:07:36.651 I/O Commands 00:07:36.651 ------------ 00:07:36.651 Flush (00h): Supported LBA-Change 00:07:36.651 Write (01h): Supported LBA-Change 00:07:36.651 Read (02h): Supported 00:07:36.651 Compare (05h): Supported 00:07:36.651 Write Zeroes (08h): Supported LBA-Change 00:07:36.651 Dataset Management (09h): Supported LBA-Change 00:07:36.651 Unknown (0Ch): Supported 00:07:36.651 Unknown (12h): Supported 00:07:36.651 Copy (19h): Supported LBA-Change 00:07:36.651 Unknown (1Dh): Supported LBA-Change 00:07:36.651 00:07:36.651 Error Log 00:07:36.651 ========= 00:07:36.651 00:07:36.651 Arbitration 00:07:36.651 =========== 00:07:36.651 Arbitration Burst: no limit 00:07:36.651 00:07:36.651 Power Management 00:07:36.651 ================ 00:07:36.651 Number of Power States: 1 00:07:36.651 Current Power State: Power State #0 00:07:36.651 Power State #0: 00:07:36.651 Max Power: 25.00 W 00:07:36.651 Non-Operational State: Operational 00:07:36.651 Entry Latency: 16 microseconds 00:07:36.651 Exit Latency: 4 microseconds 00:07:36.651 Relative Read Throughput: 0 00:07:36.651 Relative Read Latency: 0 00:07:36.651 Relative Write Throughput: 0 00:07:36.651 Relative Write Latency: 0 00:07:36.651 Idle Power[2024-11-29 09:25:04.353259] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76226 terminated unexpected 00:07:36.651 : Not Reported 00:07:36.651 Active Power: Not Reported 00:07:36.651 Non-Operational Permissive Mode: Not Supported 00:07:36.651 00:07:36.651 Health Information 00:07:36.651 ================== 00:07:36.651 Critical Warnings: 00:07:36.651 Available Spare Space: OK 00:07:36.651 Temperature: OK 00:07:36.651 Device Reliability: OK 00:07:36.651 Read Only: No 00:07:36.651 Volatile Memory Backup: OK 00:07:36.651 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.651 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.651 Available Spare: 0% 00:07:36.651 Available Spare Threshold: 0% 00:07:36.651 Life Percentage Used: 0% 00:07:36.651 Data Units Read: 678 00:07:36.651 Data Units Written: 606 00:07:36.651 Host Read Commands: 38499 00:07:36.651 Host Write Commands: 38285 00:07:36.651 Controller Busy Time: 0 minutes 00:07:36.651 Power Cycles: 0 00:07:36.651 Power On Hours: 0 hours 00:07:36.651 Unsafe Shutdowns: 0 00:07:36.651 Unrecoverable Media Errors: 0 00:07:36.651 Lifetime Error Log Entries: 0 00:07:36.651 Warning Temperature Time: 0 minutes 00:07:36.651 Critical Temperature Time: 0 minutes 00:07:36.651 00:07:36.651 Number of Queues 00:07:36.651 ================ 00:07:36.651 Number of I/O Submission Queues: 64 00:07:36.651 Number of 
I/O Completion Queues: 64 00:07:36.651 00:07:36.651 ZNS Specific Controller Data 00:07:36.651 ============================ 00:07:36.651 Zone Append Size Limit: 0 00:07:36.651 00:07:36.651 00:07:36.651 Active Namespaces 00:07:36.651 ================= 00:07:36.651 Namespace ID:1 00:07:36.651 Error Recovery Timeout: Unlimited 00:07:36.651 Command Set Identifier: NVM (00h) 00:07:36.651 Deallocate: Supported 00:07:36.651 Deallocated/Unwritten Error: Supported 00:07:36.651 Deallocated Read Value: All 0x00 00:07:36.651 Deallocate in Write Zeroes: Not Supported 00:07:36.651 Deallocated Guard Field: 0xFFFF 00:07:36.651 Flush: Supported 00:07:36.651 Reservation: Not Supported 00:07:36.651 Metadata Transferred as: Separate Metadata Buffer 00:07:36.651 Namespace Sharing Capabilities: Private 00:07:36.651 Size (in LBAs): 1548666 (5GiB) 00:07:36.651 Capacity (in LBAs): 1548666 (5GiB) 00:07:36.651 Utilization (in LBAs): 1548666 (5GiB) 00:07:36.651 Thin Provisioning: Not Supported 00:07:36.651 Per-NS Atomic Units: No 00:07:36.651 Maximum Single Source Range Length: 128 00:07:36.651 Maximum Copy Length: 128 00:07:36.651 Maximum Source Range Count: 128 00:07:36.651 NGUID/EUI64 Never Reused: No 00:07:36.651 Namespace Write Protected: No 00:07:36.651 Number of LBA Formats: 8 00:07:36.651 Current LBA Format: LBA Format #07 00:07:36.651 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.651 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.651 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.651 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.651 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.651 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.651 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.651 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.651 00:07:36.651 NVM Specific Namespace Data 00:07:36.651 =========================== 00:07:36.651 Logical Block Storage Tag Mask: 0 00:07:36.651 Protection Information Capabilities: 00:07:36.651 16b Guard Protection Information Storage Tag Support: No 00:07:36.651 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.651 Storage Tag Check Read Support: No 00:07:36.651 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.651 ===================================================== 00:07:36.651 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:36.651 ===================================================== 00:07:36.651 Controller Capabilities/Features 00:07:36.651 ================================ 00:07:36.651 Vendor ID: 1b36 00:07:36.651 Subsystem Vendor ID: 1af4 00:07:36.651 Serial Number: 12341 00:07:36.651 Model Number: QEMU NVMe Ctrl 00:07:36.651 Firmware Version: 8.0.0 00:07:36.651 Recommended Arb Burst: 6 
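The namespace listings above report capacity twice, as a raw LBA count and as a rounded GiB figure; the two are related through the data size of the namespace's current LBA format (4096 bytes for LBA Format #07 and #04 here). A minimal Python sketch of that arithmetic, using values copied from the output above; the truncating division is an assumption, though it reproduces every GiB figure in this log:

    # Sketch: relate "Size (in LBAs)" to the "(xGiB)" annotation printed above.
    # Assumes the GiB figure is LBA count * current-format data size, truncated
    # to whole GiB (an assumption that matches every namespace in this log).
    def lbas_to_gib(num_lbas: int, lba_data_size: int) -> int:
        return (num_lbas * lba_data_size) // (1 << 30)

    print(lbas_to_gib(1548666, 4096))  # 5 -> "1548666 (5GiB)" via LBA Format #07
    print(lbas_to_gib(1310720, 4096))  # 5 -> "1310720 (5GiB)" via LBA Format #04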
00:07:36.651 IEEE OUI Identifier: 00 54 52 00:07:36.651 Multi-path I/O 00:07:36.651 May have multiple subsystem ports: No 00:07:36.651 May have multiple controllers: No 00:07:36.651 Associated with SR-IOV VF: No 00:07:36.651 Max Data Transfer Size: 524288 00:07:36.651 Max Number of Namespaces: 256 00:07:36.651 Max Number of I/O Queues: 64 00:07:36.651 NVMe Specification Version (VS): 1.4 00:07:36.651 NVMe Specification Version (Identify): 1.4 00:07:36.651 Maximum Queue Entries: 2048 00:07:36.651 Contiguous Queues Required: Yes 00:07:36.651 Arbitration Mechanisms Supported 00:07:36.651 Weighted Round Robin: Not Supported 00:07:36.651 Vendor Specific: Not Supported 00:07:36.651 Reset Timeout: 7500 ms 00:07:36.651 Doorbell Stride: 4 bytes 00:07:36.651 NVM Subsystem Reset: Not Supported 00:07:36.651 Command Sets Supported 00:07:36.651 NVM Command Set: Supported 00:07:36.651 Boot Partition: Not Supported 00:07:36.651 Memory Page Size Minimum: 4096 bytes 00:07:36.651 Memory Page Size Maximum: 65536 bytes 00:07:36.651 Persistent Memory Region: Not Supported 00:07:36.651 Optional Asynchronous Events Supported 00:07:36.652 Namespace Attribute Notices: Supported 00:07:36.652 Firmware Activation Notices: Not Supported 00:07:36.652 ANA Change Notices: Not Supported 00:07:36.652 PLE Aggregate Log Change Notices: Not Supported 00:07:36.652 LBA Status Info Alert Notices: Not Supported 00:07:36.652 EGE Aggregate Log Change Notices: Not Supported 00:07:36.652 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.652 Zone Descriptor Change Notices: Not Supported 00:07:36.652 Discovery Log Change Notices: Not Supported 00:07:36.652 Controller Attributes 00:07:36.652 128-bit Host Identifier: Not Supported 00:07:36.652 Non-Operational Permissive Mode: Not Supported 00:07:36.652 NVM Sets: Not Supported 00:07:36.652 Read Recovery Levels: Not Supported 00:07:36.652 Endurance Groups: Not Supported 00:07:36.652 Predictable Latency Mode: Not Supported 00:07:36.652 Traffic Based Keep Alive: Not Supported 00:07:36.652 Namespace Granularity: Not Supported 00:07:36.652 SQ Associations: Not Supported 00:07:36.652 UUID List: Not Supported 00:07:36.652 Multi-Domain Subsystem: Not Supported 00:07:36.652 Fixed Capacity Management: Not Supported 00:07:36.652 Variable Capacity Management: Not Supported 00:07:36.652 Delete Endurance Group: Not Supported 00:07:36.652 Delete NVM Set: Not Supported 00:07:36.652 Extended LBA Formats Supported: Supported 00:07:36.652 Flexible Data Placement Supported: Not Supported 00:07:36.652 00:07:36.652 Controller Memory Buffer Support 00:07:36.652 ================================ 00:07:36.652 Supported: No 00:07:36.652 00:07:36.652 Persistent Memory Region Support 00:07:36.652 ================================ 00:07:36.652 Supported: No 00:07:36.652 00:07:36.652 Admin Command Set Attributes 00:07:36.652 ============================ 00:07:36.652 Security Send/Receive: Not Supported 00:07:36.652 Format NVM: Supported 00:07:36.652 Firmware Activate/Download: Not Supported 00:07:36.652 Namespace Management: Supported 00:07:36.652 Device Self-Test: Not Supported 00:07:36.652 Directives: Supported 00:07:36.652 NVMe-MI: Not Supported 00:07:36.652 Virtualization Management: Not Supported 00:07:36.652 Doorbell Buffer Config: Supported 00:07:36.652 Get LBA Status Capability: Not Supported 00:07:36.652 Command & Feature Lockdown Capability: Not Supported 00:07:36.652 Abort Command Limit: 4 00:07:36.652 Async Event Request Limit: 4 00:07:36.652 Number of Firmware Slots: N/A 00:07:36.652 Firmware Slot
1 Read-Only: N/A 00:07:36.652 Firmware Activation Without Reset: N/A 00:07:36.652 Multiple Update Detection Support: N/A 00:07:36.652 Firmware Update Granularity: No Information Provided 00:07:36.652 Per-Namespace SMART Log: Yes 00:07:36.652 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.652 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:36.652 Command Effects Log Page: Supported 00:07:36.652 Get Log Page Extended Data: Supported 00:07:36.652 Telemetry Log Pages: Not Supported 00:07:36.652 Persistent Event Log Pages: Not Supported 00:07:36.652 Supported Log Pages Log Page: May Support 00:07:36.652 Commands Supported & Effects Log Page: Not Supported 00:07:36.652 Feature Identifiers & Effects Log Page: May Support 00:07:36.652 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.652 Data Area 4 for Telemetry Log: Not Supported 00:07:36.652 Error Log Page Entries Supported: 1 00:07:36.652 Keep Alive: Not Supported 00:07:36.652 00:07:36.652 NVM Command Set Attributes 00:07:36.652 ========================== 00:07:36.652 Submission Queue Entry Size 00:07:36.652 Max: 64 00:07:36.652 Min: 64 00:07:36.652 Completion Queue Entry Size 00:07:36.652 Max: 16 00:07:36.652 Min: 16 00:07:36.652 Number of Namespaces: 256 00:07:36.652 Compare Command: Supported 00:07:36.652 Write Uncorrectable Command: Not Supported 00:07:36.652 Dataset Management Command: Supported 00:07:36.652 Write Zeroes Command: Supported 00:07:36.652 Set Features Save Field: Supported 00:07:36.652 Reservations: Not Supported 00:07:36.652 Timestamp: Supported 00:07:36.652 Copy: Supported 00:07:36.652 Volatile Write Cache: Present 00:07:36.652 Atomic Write Unit (Normal): 1 00:07:36.652 Atomic Write Unit (PFail): 1 00:07:36.652 Atomic Compare & Write Unit: 1 00:07:36.652 Fused Compare & Write: Not Supported 00:07:36.652 Scatter-Gather List 00:07:36.652 SGL Command Set: Supported 00:07:36.652 SGL Keyed: Not Supported 00:07:36.652 SGL Bit Bucket Descriptor: Not Supported 00:07:36.652 SGL Metadata Pointer: Not Supported 00:07:36.652 Oversized SGL: Not Supported 00:07:36.652 SGL Metadata Address: Not Supported 00:07:36.652 SGL Offset: Not Supported 00:07:36.652 Transport SGL Data Block: Not Supported 00:07:36.652 Replay Protected Memory Block: Not Supported 00:07:36.652 00:07:36.652 Firmware Slot Information 00:07:36.652 ========================= 00:07:36.652 Active slot: 1 00:07:36.652 Slot 1 Firmware Revision: 1.0 00:07:36.652 00:07:36.652 00:07:36.652 Commands Supported and Effects 00:07:36.652 ============================== 00:07:36.652 Admin Commands 00:07:36.652 -------------- 00:07:36.652 Delete I/O Submission Queue (00h): Supported 00:07:36.652 Create I/O Submission Queue (01h): Supported 00:07:36.652 Get Log Page (02h): Supported 00:07:36.652 Delete I/O Completion Queue (04h): Supported 00:07:36.652 Create I/O Completion Queue (05h): Supported 00:07:36.652 Identify (06h): Supported 00:07:36.652 Abort (08h): Supported 00:07:36.652 Set Features (09h): Supported 00:07:36.652 Get Features (0Ah): Supported 00:07:36.652 Asynchronous Event Request (0Ch): Supported 00:07:36.652 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.652 Directive Send (19h): Supported 00:07:36.652 Directive Receive (1Ah): Supported 00:07:36.652 Virtualization Management (1Ch): Supported 00:07:36.652 Doorbell Buffer Config (7Ch): Supported 00:07:36.652 Format NVM (80h): Supported LBA-Change 00:07:36.652 I/O Commands 00:07:36.652 ------------ 00:07:36.652 Flush (00h): Supported LBA-Change 00:07:36.652 Write (01h): Supported
LBA-Change 00:07:36.652 Read (02h): Supported 00:07:36.652 Compare (05h): Supported 00:07:36.652 Write Zeroes (08h): Supported LBA-Change 00:07:36.652 Dataset Management (09h): Supported LBA-Change 00:07:36.652 Unknown (0Ch): Supported 00:07:36.652 Unknown (12h): Supported 00:07:36.652 Copy (19h): Supported LBA-Change 00:07:36.652 Unknown (1Dh): Supported LBA-Change 00:07:36.652 00:07:36.652 Error Log 00:07:36.652 ========= 00:07:36.652 00:07:36.652 Arbitration 00:07:36.652 =========== 00:07:36.652 Arbitration Burst: no limit 00:07:36.652 00:07:36.652 Power Management 00:07:36.652 ================ 00:07:36.652 Number of Power States: 1 00:07:36.652 Current Power State: Power State #0 00:07:36.652 Power State #0: 00:07:36.652 Max Power: 25.00 W 00:07:36.652 Non-Operational State: Operational 00:07:36.652 Entry Latency: 16 microseconds 00:07:36.652 Exit Latency: 4 microseconds 00:07:36.652 Relative Read Throughput: 0 00:07:36.652 Relative Read Latency: 0 00:07:36.652 Relative Write Throughput: 0 00:07:36.652 Relative Write Latency: 0 00:07:36.652 Idle Power: Not Reported 00:07:36.652 Active Power: Not Reported 00:07:36.652 Non-Operational Permissive Mode: Not Supported 00:07:36.652 00:07:36.652 Health Information 00:07:36.652 ================== 00:07:36.652 Critical Warnings: 00:07:36.652 Available Spare Space: OK 00:07:36.652 [2024-11-29 09:25:04.354389] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76226 terminated unexpected 00:07:36.652 Temperature: OK 00:07:36.652 Device Reliability: OK 00:07:36.652 Read Only: No 00:07:36.652 Volatile Memory Backup: OK 00:07:36.652 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.652 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.652 Available Spare: 0% 00:07:36.652 Available Spare Threshold: 0% 00:07:36.652 Life Percentage Used: 0% 00:07:36.652 Data Units Read: 1141 00:07:36.652 Data Units Written: 1014 00:07:36.652 Host Read Commands: 59076 00:07:36.652 Host Write Commands: 57968 00:07:36.652 Controller Busy Time: 0 minutes 00:07:36.652 Power Cycles: 0 00:07:36.652 Power On Hours: 0 hours 00:07:36.652 Unsafe Shutdowns: 0 00:07:36.652 Unrecoverable Media Errors: 0 00:07:36.652 Lifetime Error Log Entries: 0 00:07:36.652 Warning Temperature Time: 0 minutes 00:07:36.652 Critical Temperature Time: 0 minutes 00:07:36.652 00:07:36.652 Number of Queues 00:07:36.652 ================ 00:07:36.652 Number of I/O Submission Queues: 64 00:07:36.652 Number of I/O Completion Queues: 64 00:07:36.652 00:07:36.652 ZNS Specific Controller Data 00:07:36.652 ============================ 00:07:36.652 Zone Append Size Limit: 0 00:07:36.652 00:07:36.652 00:07:36.652 Active Namespaces 00:07:36.653 ================= 00:07:36.653 Namespace ID:1 00:07:36.653 Error Recovery Timeout: Unlimited 00:07:36.653 Command Set Identifier: NVM (00h) 00:07:36.653 Deallocate: Supported 00:07:36.653 Deallocated/Unwritten Error: Supported 00:07:36.653 Deallocated Read Value: All 0x00 00:07:36.653 Deallocate in Write Zeroes: Not Supported 00:07:36.653 Deallocated Guard Field: 0xFFFF 00:07:36.653 Flush: Supported 00:07:36.653 Reservation: Not Supported 00:07:36.653 Namespace Sharing Capabilities: Private 00:07:36.653 Size (in LBAs): 1310720 (5GiB) 00:07:36.653 Capacity (in LBAs): 1310720 (5GiB) 00:07:36.653 Utilization (in LBAs): 1310720 (5GiB) 00:07:36.653 Thin Provisioning: Not Supported 00:07:36.653 Per-NS Atomic Units: No 00:07:36.653 Maximum Single Source Range Length: 128 00:07:36.653 Maximum Copy Length: 128 00:07:36.653 Maximum Source
Range Count: 128 00:07:36.653 NGUID/EUI64 Never Reused: No 00:07:36.653 Namespace Write Protected: No 00:07:36.653 Number of LBA Formats: 8 00:07:36.653 Current LBA Format: LBA Format #04 00:07:36.653 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.653 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.653 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.653 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.653 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.653 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.653 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.653 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.653 00:07:36.653 NVM Specific Namespace Data 00:07:36.653 =========================== 00:07:36.653 Logical Block Storage Tag Mask: 0 00:07:36.653 Protection Information Capabilities: 00:07:36.653 16b Guard Protection Information Storage Tag Support: No 00:07:36.653 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.653 Storage Tag Check Read Support: No 00:07:36.653 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.653 ===================================================== 00:07:36.653 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:36.653 ===================================================== 00:07:36.653 Controller Capabilities/Features 00:07:36.653 ================================ 00:07:36.653 Vendor ID: 1b36 00:07:36.653 Subsystem Vendor ID: 1af4 00:07:36.653 Serial Number: 12343 00:07:36.653 Model Number: QEMU NVMe Ctrl 00:07:36.653 Firmware Version: 8.0.0 00:07:36.653 Recommended Arb Burst: 6 00:07:36.653 IEEE OUI Identifier: 00 54 52 00:07:36.653 Multi-path I/O 00:07:36.653 May have multiple subsystem ports: No 00:07:36.653 May have multiple controllers: Yes 00:07:36.653 Associated with SR-IOV VF: No 00:07:36.653 Max Data Transfer Size: 524288 00:07:36.653 Max Number of Namespaces: 256 00:07:36.653 Max Number of I/O Queues: 64 00:07:36.653 NVMe Specification Version (VS): 1.4 00:07:36.653 NVMe Specification Version (Identify): 1.4 00:07:36.653 Maximum Queue Entries: 2048 00:07:36.653 Contiguous Queues Required: Yes 00:07:36.653 Arbitration Mechanisms Supported 00:07:36.653 Weighted Round Robin: Not Supported 00:07:36.653 Vendor Specific: Not Supported 00:07:36.653 Reset Timeout: 7500 ms 00:07:36.653 Doorbell Stride: 4 bytes 00:07:36.653 NVM Subsystem Reset: Not Supported 00:07:36.653 Command Sets Supported 00:07:36.653 NVM Command Set: Supported 00:07:36.653 Boot Partition: Not Supported 00:07:36.653 Memory Page Size Minimum: 4096 bytes 00:07:36.653 Memory Page Size Maximum: 65536 bytes 00:07:36.653 Persistent Memory Region: Not Supported 00:07:36.653 Optional Asynchronous Events Supported 00:07:36.653 
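Each Health Information block above prints the controller temperature in Kelvin with a Celsius value in parentheses. A one-line Python check of those pairs; the printed values are consistent with a plain integer offset of 273 rather than the exact 273.15:

    # Sketch: verify the Kelvin/Celsius pairs from the Health Information blocks.
    def kelvin_to_celsius(k: int) -> int:
        return k - 273  # integer offset, matching the printed pairs

    print(kelvin_to_celsius(323))  # 50 -> "323 Kelvin (50 Celsius)"
    print(kelvin_to_celsius(343))  # 70 -> "343 Kelvin (70 Celsius)" threshold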
Namespace Attribute Notices: Supported 00:07:36.653 Firmware Activation Notices: Not Supported 00:07:36.653 ANA Change Notices: Not Supported 00:07:36.653 PLE Aggregate Log Change Notices: Not Supported 00:07:36.653 LBA Status Info Alert Notices: Not Supported 00:07:36.653 EGE Aggregate Log Change Notices: Not Supported 00:07:36.653 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.653 Zone Descriptor Change Notices: Not Supported 00:07:36.653 Discovery Log Change Notices: Not Supported 00:07:36.653 Controller Attributes 00:07:36.653 128-bit Host Identifier: Not Supported 00:07:36.653 Non-Operational Permissive Mode: Not Supported 00:07:36.653 NVM Sets: Not Supported 00:07:36.653 Read Recovery Levels: Not Supported 00:07:36.653 Endurance Groups: Supported 00:07:36.653 Predictable Latency Mode: Not Supported 00:07:36.653 Traffic Based Keep Alive: Not Supported 00:07:36.653 Namespace Granularity: Not Supported 00:07:36.653 SQ Associations: Not Supported 00:07:36.653 UUID List: Not Supported 00:07:36.653 Multi-Domain Subsystem: Not Supported 00:07:36.653 Fixed Capacity Management: Not Supported 00:07:36.653 Variable Capacity Management: Not Supported 00:07:36.653 Delete Endurance Group: Not Supported 00:07:36.653 Delete NVM Set: Not Supported 00:07:36.653 Extended LBA Formats Supported: Supported 00:07:36.653 Flexible Data Placement Supported: Supported 00:07:36.653 00:07:36.653 Controller Memory Buffer Support 00:07:36.653 ================================ 00:07:36.653 Supported: No 00:07:36.653 00:07:36.653 Persistent Memory Region Support 00:07:36.653 ================================ 00:07:36.653 Supported: No 00:07:36.653 00:07:36.653 Admin Command Set Attributes 00:07:36.653 ============================ 00:07:36.653 Security Send/Receive: Not Supported 00:07:36.653 Format NVM: Supported 00:07:36.653 Firmware Activate/Download: Not Supported 00:07:36.653 Namespace Management: Supported 00:07:36.653 Device Self-Test: Not Supported 00:07:36.653 Directives: Supported 00:07:36.653 NVMe-MI: Not Supported 00:07:36.653 Virtualization Management: Not Supported 00:07:36.653 Doorbell Buffer Config: Supported 00:07:36.653 Get LBA Status Capability: Not Supported 00:07:36.653 Command & Feature Lockdown Capability: Not Supported 00:07:36.653 Abort Command Limit: 4 00:07:36.653 Async Event Request Limit: 4 00:07:36.653 Number of Firmware Slots: N/A 00:07:36.653 Firmware Slot 1 Read-Only: N/A 00:07:36.653 Firmware Activation Without Reset: N/A 00:07:36.653 Multiple Update Detection Support: N/A 00:07:36.653 Firmware Update Granularity: No Information Provided 00:07:36.653 Per-Namespace SMART Log: Yes 00:07:36.653 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.653 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:36.653 Command Effects Log Page: Supported 00:07:36.653 Get Log Page Extended Data: Supported 00:07:36.653 Telemetry Log Pages: Not Supported 00:07:36.653 Persistent Event Log Pages: Not Supported 00:07:36.653 Supported Log Pages Log Page: May Support 00:07:36.653 Commands Supported & Effects Log Page: Not Supported 00:07:36.653 Feature Identifiers & Effects Log Page: May Support 00:07:36.653 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.653 Data Area 4 for Telemetry Log: Not Supported 00:07:36.653 Error Log Page Entries Supported: 1 00:07:36.653 Keep Alive: Not Supported 00:07:36.653 00:07:36.653 NVM Command Set Attributes 00:07:36.653 ========================== 00:07:36.653 Submission Queue Entry Size 00:07:36.653 Max: 64 00:07:36.653 Min: 64 00:07:36.653
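Most of the identify dump is a flat list of "Feature: Supported" / "Feature: Not Supported" lines, which makes it easy to diff two controllers, for example 12343 above (Endurance Groups and Flexible Data Placement: Supported) against its siblings. A hedged Python sketch; parse_features is a hypothetical helper, and it assumes one field per line as the tool originally prints them, before any log interleaving:

    # Hypothetical helper: fold "Name: Supported"/"Name: Not Supported" lines
    # from an identify dump into a dict so controllers can be compared.
    def parse_features(dump: str) -> dict[str, bool]:
        feats = {}
        for line in dump.splitlines():
            name, sep, value = line.partition(":")
            value = value.strip()
            if sep and value in ("Supported", "Not Supported"):
                feats[name.strip()] = (value == "Supported")
        return feats

    sample = "Endurance Groups: Supported\nNVM Sets: Not Supported"
    print(parse_features(sample))  # {'Endurance Groups': True, 'NVM Sets': False}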
Completion Queue Entry Size 00:07:36.653 Max: 16 00:07:36.653 Min: 16 00:07:36.653 Number of Namespaces: 256 00:07:36.653 Compare Command: Supported 00:07:36.653 Write Uncorrectable Command: Not Supported 00:07:36.653 Dataset Management Command: Supported 00:07:36.653 Write Zeroes Command: Supported 00:07:36.653 Set Features Save Field: Supported 00:07:36.653 Reservations: Not Supported 00:07:36.653 Timestamp: Supported 00:07:36.653 Copy: Supported 00:07:36.653 Volatile Write Cache: Present 00:07:36.653 Atomic Write Unit (Normal): 1 00:07:36.653 Atomic Write Unit (PFail): 1 00:07:36.653 Atomic Compare & Write Unit: 1 00:07:36.653 Fused Compare & Write: Not Supported 00:07:36.653 Scatter-Gather List 00:07:36.653 SGL Command Set: Supported 00:07:36.653 SGL Keyed: Not Supported 00:07:36.653 SGL Bit Bucket Descriptor: Not Supported 00:07:36.653 SGL Metadata Pointer: Not Supported 00:07:36.653 Oversized SGL: Not Supported 00:07:36.653 SGL Metadata Address: Not Supported 00:07:36.653 SGL Offset: Not Supported 00:07:36.653 Transport SGL Data Block: Not Supported 00:07:36.653 Replay Protected Memory Block: Not Supported 00:07:36.654 00:07:36.654 Firmware Slot Information 00:07:36.654 ========================= 00:07:36.654 Active slot: 1 00:07:36.654 Slot 1 Firmware Revision: 1.0 00:07:36.654 00:07:36.654 00:07:36.654 Commands Supported and Effects 00:07:36.654 ============================== 00:07:36.654 Admin Commands 00:07:36.654 -------------- 00:07:36.654 Delete I/O Submission Queue (00h): Supported 00:07:36.654 Create I/O Submission Queue (01h): Supported 00:07:36.654 Get Log Page (02h): Supported 00:07:36.654 Delete I/O Completion Queue (04h): Supported 00:07:36.654 Create I/O Completion Queue (05h): Supported 00:07:36.654 Identify (06h): Supported 00:07:36.654 Abort (08h): Supported 00:07:36.654 Set Features (09h): Supported 00:07:36.654 Get Features (0Ah): Supported 00:07:36.654 Asynchronous Event Request (0Ch): Supported 00:07:36.654 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.654 Directive Send (19h): Supported 00:07:36.654 Directive Receive (1Ah): Supported 00:07:36.654 Virtualization Management (1Ch): Supported 00:07:36.654 Doorbell Buffer Config (7Ch): Supported 00:07:36.654 Format NVM (80h): Supported LBA-Change 00:07:36.654 I/O Commands 00:07:36.654 ------------ 00:07:36.654 Flush (00h): Supported LBA-Change 00:07:36.654 Write (01h): Supported LBA-Change 00:07:36.654 Read (02h): Supported 00:07:36.654 Compare (05h): Supported 00:07:36.654 Write Zeroes (08h): Supported LBA-Change 00:07:36.654 Dataset Management (09h): Supported LBA-Change 00:07:36.654 Unknown (0Ch): Supported 00:07:36.654 Unknown (12h): Supported 00:07:36.654 Copy (19h): Supported LBA-Change 00:07:36.654 Unknown (1Dh): Supported LBA-Change 00:07:36.654 00:07:36.654 Error Log 00:07:36.654 ========= 00:07:36.654 00:07:36.654 Arbitration 00:07:36.654 =========== 00:07:36.654 Arbitration Burst: no limit 00:07:36.654 00:07:36.654 Power Management 00:07:36.654 ================ 00:07:36.654 Number of Power States: 1 00:07:36.654 Current Power State: Power State #0 00:07:36.654 Power State #0: 00:07:36.654 Max Power: 25.00 W 00:07:36.654 Non-Operational State: Operational 00:07:36.654 Entry Latency: 16 microseconds 00:07:36.654 Exit Latency: 4 microseconds 00:07:36.654 Relative Read Throughput: 0 00:07:36.654 Relative Read Latency: 0 00:07:36.654 Relative Write Throughput: 0 00:07:36.654 Relative Write Latency: 0 00:07:36.654 Idle Power: Not Reported 00:07:36.654 Active Power: Not Reported 00:07:36.654 
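The "Commands Supported and Effects" blocks above pair every command with its NVMe opcode. For reference, the admin opcodes exactly as these controllers report them, encoded as a small Python table (values copied from the log, not from the NVMe spec):

    # Admin opcodes as printed in the "Commands Supported and Effects" blocks.
    ADMIN_OPCODES = {
        0x00: "Delete I/O Submission Queue",
        0x01: "Create I/O Submission Queue",
        0x02: "Get Log Page",
        0x04: "Delete I/O Completion Queue",
        0x05: "Create I/O Completion Queue",
        0x06: "Identify",
        0x08: "Abort",
        0x09: "Set Features",
        0x0A: "Get Features",
        0x0C: "Asynchronous Event Request",
        0x15: "Namespace Attachment",
        0x19: "Directive Send",
        0x1A: "Directive Receive",
        0x1C: "Virtualization Management",
        0x7C: "Doorbell Buffer Config",
        0x80: "Format NVM",
    }

    print(ADMIN_OPCODES[0x06])  # "Identify"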
Non-Operational Permissive Mode: Not Supported 00:07:36.654 00:07:36.654 Health Information 00:07:36.654 ================== 00:07:36.654 Critical Warnings: 00:07:36.654 Available Spare Space: OK 00:07:36.654 Temperature: OK 00:07:36.654 Device Reliability: OK 00:07:36.654 Read Only: No 00:07:36.654 Volatile Memory Backup: OK 00:07:36.654 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.654 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.654 Available Spare: 0% 00:07:36.654 Available Spare Threshold: 0% 00:07:36.654 Life Percentage Used: 0% 00:07:36.654 Data Units Read: 1088 00:07:36.654 Data Units Written: 1017 00:07:36.654 Host Read Commands: 41940 00:07:36.654 Host Write Commands: 41363 00:07:36.654 Controller Busy Time: 0 minutes 00:07:36.654 Power Cycles: 0 00:07:36.654 Power On Hours: 0 hours 00:07:36.654 Unsafe Shutdowns: 0 00:07:36.654 Unrecoverable Media Errors: 0 00:07:36.654 Lifetime Error Log Entries: 0 00:07:36.654 Warning Temperature Time: 0 minutes 00:07:36.654 Critical Temperature Time: 0 minutes 00:07:36.654 00:07:36.654 Number of Queues 00:07:36.654 ================ 00:07:36.654 Number of I/O Submission Queues: 64 00:07:36.654 Number of I/O Completion Queues: 64 00:07:36.654 00:07:36.654 ZNS Specific Controller Data 00:07:36.654 ============================ 00:07:36.654 Zone Append Size Limit: 0 00:07:36.654 00:07:36.654 00:07:36.654 Active Namespaces 00:07:36.654 ================= 00:07:36.654 Namespace ID:1 00:07:36.654 Error Recovery Timeout: Unlimited 00:07:36.654 Command Set Identifier: NVM (00h) 00:07:36.654 Deallocate: Supported 00:07:36.654 Deallocated/Unwritten Error: Supported 00:07:36.654 Deallocated Read Value: All 0x00 00:07:36.654 Deallocate in Write Zeroes: Not Supported 00:07:36.654 Deallocated Guard Field: 0xFFFF 00:07:36.654 Flush: Supported 00:07:36.654 Reservation: Not Supported 00:07:36.654 Namespace Sharing Capabilities: Multiple Controllers 00:07:36.654 Size (in LBAs): 262144 (1GiB) 00:07:36.654 Capacity (in LBAs): 262144 (1GiB) 00:07:36.654 Utilization (in LBAs): 262144 (1GiB) 00:07:36.654 Thin Provisioning: Not Supported 00:07:36.654 Per-NS Atomic Units: No 00:07:36.654 Maximum Single Source Range Length: 128 00:07:36.654 Maximum Copy Length: 128 00:07:36.654 Maximum Source Range Count: 128 00:07:36.654 NGUID/EUI64 Never Reused: No 00:07:36.654 Namespace Write Protected: No 00:07:36.654 Endurance group ID: 1 00:07:36.654 Number of LBA Formats: 8 00:07:36.654 Current LBA Format: LBA Format #04 00:07:36.654 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.654 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.654 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.654 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.654 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.654 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.654 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.654 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.654 00:07:36.654 Get Feature FDP: 00:07:36.654 ================ 00:07:36.654 Enabled: Yes 00:07:36.654 FDP configuration index: 0 00:07:36.654 00:07:36.654 FDP configurations log page 00:07:36.654 =========================== 00:07:36.654 Number of FDP configurations: 1 00:07:36.654 Version: 0 00:07:36.654 Size: 112 00:07:36.654 FDP Configuration Descriptor: 0 00:07:36.654 Descriptor Size: 96 00:07:36.654 Reclaim Group Identifier format: 2 00:07:36.654 FDP Volatile Write Cache: Not Present 00:07:36.654 FDP Configuration: Valid 00:07:36.654 Vendor 
Specific Size: 0 00:07:36.654 Number of Reclaim Groups: 2 00:07:36.654 Number of Reclaim Unit Handles: 8 00:07:36.654 Max Placement Identifiers: 128 00:07:36.654 Number of Namespaces Supported: 256 00:07:36.654 Reclaim Unit Nominal Size: 6000000 bytes 00:07:36.654 Estimated Reclaim Unit Time Limit: Not Reported 00:07:36.654 RUH Desc #000: RUH Type: Initially Isolated 00:07:36.654 RUH Desc #001: RUH Type: Initially Isolated 00:07:36.654 RUH Desc #002: RUH Type: Initially Isolated 00:07:36.654 RUH Desc #003: RUH Type: Initially Isolated 00:07:36.654 RUH Desc #004: RUH Type: Initially Isolated 00:07:36.654 RUH Desc #005: RUH Type: Initially Isolated 00:07:36.654 RUH Desc #006: RUH Type: Initially Isolated 00:07:36.654 RUH Desc #007: RUH Type: Initially Isolated 00:07:36.654 00:07:36.654 FDP reclaim unit handle usage log page 00:07:36.654 ====================================== 00:07:36.654 Number of Reclaim Unit Handles: 8 00:07:36.654 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:36.654 RUH Usage Desc #001: RUH Attributes: Unused 00:07:36.654 RUH Usage Desc #002: RUH Attributes: Unused 00:07:36.654 RUH Usage Desc #003: RUH Attributes: Unused 00:07:36.654 RUH Usage Desc #004: RUH Attributes: Unused 00:07:36.654 RUH Usage Desc #005: RUH Attributes: Unused 00:07:36.654 RUH Usage Desc #006: RUH Attributes: Unused 00:07:36.654 RUH Usage Desc #007: RUH Attributes: Unused 00:07:36.654 00:07:36.654 FDP statistics log page 00:07:36.654 ======================= 00:07:36.654 Host bytes with metadata written: 621584384 00:07:36.654 [2024-11-29 09:25:04.356558] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76226 terminated unexpected 00:07:36.654 Media bytes with metadata written: 621666304 00:07:36.654 Media bytes erased: 0 00:07:36.654 00:07:36.654 FDP events log page 00:07:36.654 =================== 00:07:36.654 Number of FDP events: 0 00:07:36.654 00:07:36.654 NVM Specific Namespace Data 00:07:36.654 =========================== 00:07:36.654 Logical Block Storage Tag Mask: 0 00:07:36.654 Protection Information Capabilities: 00:07:36.654 16b Guard Protection Information Storage Tag Support: No 00:07:36.654 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.654 Storage Tag Check Read Support: No 00:07:36.654 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.654 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.654 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.654 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.654 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.654 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.654 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.655 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.655 ===================================================== 00:07:36.655 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:36.655 ===================================================== 00:07:36.655 Controller Capabilities/Features 00:07:36.655 ================================ 00:07:36.655 Vendor ID: 1b36 00:07:36.655 Subsystem Vendor ID: 1af4 00:07:36.655 Serial Number: 12342 00:07:36.655 Model
Number: QEMU NVMe Ctrl 00:07:36.655 Firmware Version: 8.0.0 00:07:36.655 Recommended Arb Burst: 6 00:07:36.655 IEEE OUI Identifier: 00 54 52 00:07:36.655 Multi-path I/O 00:07:36.655 May have multiple subsystem ports: No 00:07:36.655 May have multiple controllers: No 00:07:36.655 Associated with SR-IOV VF: No 00:07:36.655 Max Data Transfer Size: 524288 00:07:36.655 Max Number of Namespaces: 256 00:07:36.655 Max Number of I/O Queues: 64 00:07:36.655 NVMe Specification Version (VS): 1.4 00:07:36.655 NVMe Specification Version (Identify): 1.4 00:07:36.655 Maximum Queue Entries: 2048 00:07:36.655 Contiguous Queues Required: Yes 00:07:36.655 Arbitration Mechanisms Supported 00:07:36.655 Weighted Round Robin: Not Supported 00:07:36.655 Vendor Specific: Not Supported 00:07:36.655 Reset Timeout: 7500 ms 00:07:36.655 Doorbell Stride: 4 bytes 00:07:36.655 NVM Subsystem Reset: Not Supported 00:07:36.655 Command Sets Supported 00:07:36.655 NVM Command Set: Supported 00:07:36.655 Boot Partition: Not Supported 00:07:36.655 Memory Page Size Minimum: 4096 bytes 00:07:36.655 Memory Page Size Maximum: 65536 bytes 00:07:36.655 Persistent Memory Region: Not Supported 00:07:36.655 Optional Asynchronous Events Supported 00:07:36.655 Namespace Attribute Notices: Supported 00:07:36.655 Firmware Activation Notices: Not Supported 00:07:36.655 ANA Change Notices: Not Supported 00:07:36.655 PLE Aggregate Log Change Notices: Not Supported 00:07:36.655 LBA Status Info Alert Notices: Not Supported 00:07:36.655 EGE Aggregate Log Change Notices: Not Supported 00:07:36.655 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.655 Zone Descriptor Change Notices: Not Supported 00:07:36.655 Discovery Log Change Notices: Not Supported 00:07:36.655 Controller Attributes 00:07:36.655 128-bit Host Identifier: Not Supported 00:07:36.655 Non-Operational Permissive Mode: Not Supported 00:07:36.655 NVM Sets: Not Supported 00:07:36.655 Read Recovery Levels: Not Supported 00:07:36.655 Endurance Groups: Not Supported 00:07:36.655 Predictable Latency Mode: Not Supported 00:07:36.655 Traffic Based Keep Alive: Not Supported 00:07:36.655 Namespace Granularity: Not Supported 00:07:36.655 SQ Associations: Not Supported 00:07:36.655 UUID List: Not Supported 00:07:36.655 Multi-Domain Subsystem: Not Supported 00:07:36.655 Fixed Capacity Management: Not Supported 00:07:36.655 Variable Capacity Management: Not Supported 00:07:36.655 Delete Endurance Group: Not Supported 00:07:36.655 Delete NVM Set: Not Supported 00:07:36.655 Extended LBA Formats Supported: Supported 00:07:36.655 Flexible Data Placement Supported: Not Supported 00:07:36.655 00:07:36.655 Controller Memory Buffer Support 00:07:36.655 ================================ 00:07:36.655 Supported: No 00:07:36.655 00:07:36.655 Persistent Memory Region Support 00:07:36.655 ================================ 00:07:36.655 Supported: No 00:07:36.655 00:07:36.655 Admin Command Set Attributes 00:07:36.655 ============================ 00:07:36.655 Security Send/Receive: Not Supported 00:07:36.655 Format NVM: Supported 00:07:36.655 Firmware Activate/Download: Not Supported 00:07:36.655 Namespace Management: Supported 00:07:36.655 Device Self-Test: Not Supported 00:07:36.655 Directives: Supported 00:07:36.655 NVMe-MI: Not Supported 00:07:36.655 Virtualization Management: Not Supported 00:07:36.655 Doorbell Buffer Config: Supported 00:07:36.655 Get LBA Status Capability: Not Supported 00:07:36.655 Command & Feature Lockdown Capability: Not Supported 00:07:36.655 Abort Command Limit: 4 00:07:36.655
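The page-size and doorbell figures above come from fixed-width fields in the controller's CAP register: per the NVMe base specification, the memory page size is 2^(12 + MPS) bytes and the doorbell stride is 2^(2 + DSTRD) bytes. A sketch that decodes such fields; the raw field values (0, 4, 0) are inferred from the printed bytes, not read from the device:

    # Sketch: CAP-register-style field decoding per the NVMe base spec.
    def mps_bytes(mps_field: int) -> int:
        return 1 << (12 + mps_field)   # memory page size = 2^(12 + MPS)

    def dstrd_bytes(dstrd_field: int) -> int:
        return 1 << (2 + dstrd_field)  # doorbell stride = 2^(2 + DSTRD)

    print(mps_bytes(0))    # 4096  -> "Memory Page Size Minimum: 4096 bytes"
    print(mps_bytes(4))    # 65536 -> "Memory Page Size Maximum: 65536 bytes"
    print(dstrd_bytes(0))  # 4     -> "Doorbell Stride: 4 bytes"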
Async Event Request Limit: 4 00:07:36.655 Number of Firmware Slots: N/A 00:07:36.655 Firmware Slot 1 Read-Only: N/A 00:07:36.655 Firmware Activation Without Reset: N/A 00:07:36.655 Multiple Update Detection Support: N/A 00:07:36.655 Firmware Update Granularity: No Information Provided 00:07:36.655 Per-Namespace SMART Log: Yes 00:07:36.655 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.655 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:36.655 Command Effects Log Page: Supported 00:07:36.655 Get Log Page Extended Data: Supported 00:07:36.655 Telemetry Log Pages: Not Supported 00:07:36.655 Persistent Event Log Pages: Not Supported 00:07:36.655 Supported Log Pages Log Page: May Support 00:07:36.655 Commands Supported & Effects Log Page: Not Supported 00:07:36.655 Feature Identifiers & Effects Log Page: May Support 00:07:36.655 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.655 Data Area 4 for Telemetry Log: Not Supported 00:07:36.655 Error Log Page Entries Supported: 1 00:07:36.655 Keep Alive: Not Supported 00:07:36.655 00:07:36.655 NVM Command Set Attributes 00:07:36.655 ========================== 00:07:36.655 Submission Queue Entry Size 00:07:36.655 Max: 64 00:07:36.655 Min: 64 00:07:36.655 Completion Queue Entry Size 00:07:36.655 Max: 16 00:07:36.655 Min: 16 00:07:36.655 Number of Namespaces: 256 00:07:36.655 Compare Command: Supported 00:07:36.655 Write Uncorrectable Command: Not Supported 00:07:36.655 Dataset Management Command: Supported 00:07:36.655 Write Zeroes Command: Supported 00:07:36.655 Set Features Save Field: Supported 00:07:36.655 Reservations: Not Supported 00:07:36.655 Timestamp: Supported 00:07:36.655 Copy: Supported 00:07:36.655 Volatile Write Cache: Present 00:07:36.655 Atomic Write Unit (Normal): 1 00:07:36.655 Atomic Write Unit (PFail): 1 00:07:36.655 Atomic Compare & Write Unit: 1 00:07:36.655 Fused Compare & Write: Not Supported 00:07:36.655 Scatter-Gather List 00:07:36.655 SGL Command Set: Supported 00:07:36.655 SGL Keyed: Not Supported 00:07:36.655 SGL Bit Bucket Descriptor: Not Supported 00:07:36.655 SGL Metadata Pointer: Not Supported 00:07:36.655 Oversized SGL: Not Supported 00:07:36.655 SGL Metadata Address: Not Supported 00:07:36.655 SGL Offset: Not Supported 00:07:36.655 Transport SGL Data Block: Not Supported 00:07:36.655 Replay Protected Memory Block: Not Supported 00:07:36.655 00:07:36.655 Firmware Slot Information 00:07:36.655 ========================= 00:07:36.655 Active slot: 1 00:07:36.655 Slot 1 Firmware Revision: 1.0 00:07:36.655 00:07:36.655 00:07:36.655 Commands Supported and Effects 00:07:36.655 ============================== 00:07:36.655 Admin Commands 00:07:36.655 -------------- 00:07:36.655 Delete I/O Submission Queue (00h): Supported 00:07:36.655 Create I/O Submission Queue (01h): Supported 00:07:36.655 Get Log Page (02h): Supported 00:07:36.655 Delete I/O Completion Queue (04h): Supported 00:07:36.655 Create I/O Completion Queue (05h): Supported 00:07:36.655 Identify (06h): Supported 00:07:36.655 Abort (08h): Supported 00:07:36.655 Set Features (09h): Supported 00:07:36.655 Get Features (0Ah): Supported 00:07:36.655 Asynchronous Event Request (0Ch): Supported 00:07:36.656 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.656 Directive Send (19h): Supported 00:07:36.656 Directive Receive (1Ah): Supported 00:07:36.656 Virtualization Management (1Ch): Supported 00:07:36.656 Doorbell Buffer Config (7Ch): Supported 00:07:36.656 Format NVM (80h): Supported LBA-Change 00:07:36.656 I/O Commands 00:07:36.656
------------ 00:07:36.656 Flush (00h): Supported LBA-Change 00:07:36.656 Write (01h): Supported LBA-Change 00:07:36.656 Read (02h): Supported 00:07:36.656 Compare (05h): Supported 00:07:36.656 Write Zeroes (08h): Supported LBA-Change 00:07:36.656 Dataset Management (09h): Supported LBA-Change 00:07:36.656 Unknown (0Ch): Supported 00:07:36.656 Unknown (12h): Supported 00:07:36.656 Copy (19h): Supported LBA-Change 00:07:36.656 Unknown (1Dh): Supported LBA-Change 00:07:36.656 00:07:36.656 Error Log 00:07:36.656 ========= 00:07:36.656 00:07:36.656 Arbitration 00:07:36.656 =========== 00:07:36.656 Arbitration Burst: no limit 00:07:36.656 00:07:36.656 Power Management 00:07:36.656 ================ 00:07:36.656 Number of Power States: 1 00:07:36.656 Current Power State: Power State #0 00:07:36.656 Power State #0: 00:07:36.656 Max Power: 25.00 W 00:07:36.656 Non-Operational State: Operational 00:07:36.656 Entry Latency: 16 microseconds 00:07:36.656 Exit Latency: 4 microseconds 00:07:36.656 Relative Read Throughput: 0 00:07:36.656 Relative Read Latency: 0 00:07:36.656 Relative Write Throughput: 0 00:07:36.656 Relative Write Latency: 0 00:07:36.656 Idle Power: Not Reported 00:07:36.656 Active Power: Not Reported 00:07:36.656 Non-Operational Permissive Mode: Not Supported 00:07:36.656 00:07:36.656 Health Information 00:07:36.656 ================== 00:07:36.656 Critical Warnings: 00:07:36.656 Available Spare Space: OK 00:07:36.656 Temperature: OK 00:07:36.656 Device Reliability: OK 00:07:36.656 Read Only: No 00:07:36.656 Volatile Memory Backup: OK 00:07:36.656 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.656 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.656 Available Spare: 0% 00:07:36.656 Available Spare Threshold: 0% 00:07:36.656 Life Percentage Used: 0% 00:07:36.656 Data Units Read: 2389 00:07:36.656 Data Units Written: 2176 00:07:36.656 Host Read Commands: 118804 00:07:36.656 Host Write Commands: 117073 00:07:36.656 Controller Busy Time: 0 minutes 00:07:36.656 Power Cycles: 0 00:07:36.656 Power On Hours: 0 hours 00:07:36.656 Unsafe Shutdowns: 0 00:07:36.656 Unrecoverable Media Errors: 0 00:07:36.656 Lifetime Error Log Entries: 0 00:07:36.656 Warning Temperature Time: 0 minutes 00:07:36.656 Critical Temperature Time: 0 minutes 00:07:36.656 00:07:36.656 Number of Queues 00:07:36.656 ================ 00:07:36.656 Number of I/O Submission Queues: 64 00:07:36.656 Number of I/O Completion Queues: 64 00:07:36.656 00:07:36.656 ZNS Specific Controller Data 00:07:36.656 ============================ 00:07:36.656 Zone Append Size Limit: 0 00:07:36.656 00:07:36.656 00:07:36.656 Active Namespaces 00:07:36.656 ================= 00:07:36.656 Namespace ID:1 00:07:36.656 Error Recovery Timeout: Unlimited 00:07:36.656 Command Set Identifier: NVM (00h) 00:07:36.656 Deallocate: Supported 00:07:36.656 Deallocated/Unwritten Error: Supported 00:07:36.656 Deallocated Read Value: All 0x00 00:07:36.656 Deallocate in Write Zeroes: Not Supported 00:07:36.656 Deallocated Guard Field: 0xFFFF 00:07:36.656 Flush: Supported 00:07:36.656 Reservation: Not Supported 00:07:36.656 Namespace Sharing Capabilities: Private 00:07:36.656 Size (in LBAs): 1048576 (4GiB) 00:07:36.656 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.656 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.656 Thin Provisioning: Not Supported 00:07:36.656 Per-NS Atomic Units: No 00:07:36.656 Maximum Single Source Range Length: 128 00:07:36.656 Maximum Copy Length: 128 00:07:36.656 Maximum Source Range Count: 128 00:07:36.656 NGUID/EUI64 Never Reused: No 
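The SMART counters above ("Data Units Read: 2389", "Data Units Written: 2176" for 0000:00:12.0) use the NVMe convention that one data unit is 1000 blocks of 512 bytes. Assuming the QEMU controller follows that convention, the counters translate to bytes as follows:

    # Sketch: SMART "Data Units" to bytes (1 unit = 1000 * 512 bytes per spec).
    DATA_UNIT_BYTES = 512 * 1000

    print(2389 * DATA_UNIT_BYTES)  # 1223168000 bytes read    (~1.14 GiB)
    print(2176 * DATA_UNIT_BYTES)  # 1114112000 bytes written (~1.04 GiB)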
00:07:36.656 Namespace Write Protected: No 00:07:36.656 Number of LBA Formats: 8 00:07:36.656 Current LBA Format: LBA Format #04 00:07:36.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.656 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.656 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.656 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.656 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.656 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.656 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.656 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.656 00:07:36.656 NVM Specific Namespace Data 00:07:36.656 =========================== 00:07:36.656 Logical Block Storage Tag Mask: 0 00:07:36.656 Protection Information Capabilities: 00:07:36.656 16b Guard Protection Information Storage Tag Support: No 00:07:36.656 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.656 Storage Tag Check Read Support: No 00:07:36.656 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Namespace ID:2 00:07:36.656 Error Recovery Timeout: Unlimited 00:07:36.656 Command Set Identifier: NVM (00h) 00:07:36.656 Deallocate: Supported 00:07:36.656 Deallocated/Unwritten Error: Supported 00:07:36.656 Deallocated Read Value: All 0x00 00:07:36.656 Deallocate in Write Zeroes: Not Supported 00:07:36.656 Deallocated Guard Field: 0xFFFF 00:07:36.656 Flush: Supported 00:07:36.656 Reservation: Not Supported 00:07:36.656 Namespace Sharing Capabilities: Private 00:07:36.656 Size (in LBAs): 1048576 (4GiB) 00:07:36.656 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.656 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.656 Thin Provisioning: Not Supported 00:07:36.656 Per-NS Atomic Units: No 00:07:36.656 Maximum Single Source Range Length: 128 00:07:36.656 Maximum Copy Length: 128 00:07:36.656 Maximum Source Range Count: 128 00:07:36.656 NGUID/EUI64 Never Reused: No 00:07:36.656 Namespace Write Protected: No 00:07:36.656 Number of LBA Formats: 8 00:07:36.656 Current LBA Format: LBA Format #04 00:07:36.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.656 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.656 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.656 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.656 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.656 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.656 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.656 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.656 00:07:36.656 NVM Specific Namespace Data 00:07:36.656 =========================== 00:07:36.656 Logical Block Storage Tag Mask: 0 00:07:36.656 Protection Information 
Capabilities: 00:07:36.656 16b Guard Protection Information Storage Tag Support: No 00:07:36.656 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.656 Storage Tag Check Read Support: No 00:07:36.656 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.656 Namespace ID:3 00:07:36.656 Error Recovery Timeout: Unlimited 00:07:36.656 Command Set Identifier: NVM (00h) 00:07:36.656 Deallocate: Supported 00:07:36.656 Deallocated/Unwritten Error: Supported 00:07:36.656 Deallocated Read Value: All 0x00 00:07:36.656 Deallocate in Write Zeroes: Not Supported 00:07:36.656 Deallocated Guard Field: 0xFFFF 00:07:36.656 Flush: Supported 00:07:36.656 Reservation: Not Supported 00:07:36.656 Namespace Sharing Capabilities: Private 00:07:36.656 Size (in LBAs): 1048576 (4GiB) 00:07:36.918 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.918 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.918 Thin Provisioning: Not Supported 00:07:36.918 Per-NS Atomic Units: No 00:07:36.918 Maximum Single Source Range Length: 128 00:07:36.918 Maximum Copy Length: 128 00:07:36.918 Maximum Source Range Count: 128 00:07:36.918 NGUID/EUI64 Never Reused: No 00:07:36.918 Namespace Write Protected: No 00:07:36.918 Number of LBA Formats: 8 00:07:36.918 Current LBA Format: LBA Format #04 00:07:36.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.918 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.918 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.918 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.918 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.918 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.918 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.918 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.918 00:07:36.918 NVM Specific Namespace Data 00:07:36.918 =========================== 00:07:36.918 Logical Block Storage Tag Mask: 0 00:07:36.918 Protection Information Capabilities: 00:07:36.918 16b Guard Protection Information Storage Tag Support: No 00:07:36.918 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.918 Storage Tag Check Read Support: No 00:07:36.918 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.918 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.918 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.918 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.918 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.918 Extended LBA Format #05: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:36.918 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.918 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.918 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.918 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:36.918 ===================================================== 00:07:36.918 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:36.918 ===================================================== 00:07:36.918 Controller Capabilities/Features 00:07:36.918 ================================ 00:07:36.918 Vendor ID: 1b36 00:07:36.918 Subsystem Vendor ID: 1af4 00:07:36.918 Serial Number: 12340 00:07:36.918 Model Number: QEMU NVMe Ctrl 00:07:36.918 Firmware Version: 8.0.0 00:07:36.918 Recommended Arb Burst: 6 00:07:36.918 IEEE OUI Identifier: 00 54 52 00:07:36.918 Multi-path I/O 00:07:36.918 May have multiple subsystem ports: No 00:07:36.918 May have multiple controllers: No 00:07:36.918 Associated with SR-IOV VF: No 00:07:36.918 Max Data Transfer Size: 524288 00:07:36.918 Max Number of Namespaces: 256 00:07:36.918 Max Number of I/O Queues: 64 00:07:36.918 NVMe Specification Version (VS): 1.4 00:07:36.918 NVMe Specification Version (Identify): 1.4 00:07:36.918 Maximum Queue Entries: 2048 00:07:36.918 Contiguous Queues Required: Yes 00:07:36.918 Arbitration Mechanisms Supported 00:07:36.918 Weighted Round Robin: Not Supported 00:07:36.918 Vendor Specific: Not Supported 00:07:36.918 Reset Timeout: 7500 ms 00:07:36.918 Doorbell Stride: 4 bytes 00:07:36.918 NVM Subsystem Reset: Not Supported 00:07:36.918 Command Sets Supported 00:07:36.918 NVM Command Set: Supported 00:07:36.918 Boot Partition: Not Supported 00:07:36.918 Memory Page Size Minimum: 4096 bytes 00:07:36.918 Memory Page Size Maximum: 65536 bytes 00:07:36.918 Persistent Memory Region: Not Supported 00:07:36.918 Optional Asynchronous Events Supported 00:07:36.918 Namespace Attribute Notices: Supported 00:07:36.918 Firmware Activation Notices: Not Supported 00:07:36.918 ANA Change Notices: Not Supported 00:07:36.918 PLE Aggregate Log Change Notices: Not Supported 00:07:36.918 LBA Status Info Alert Notices: Not Supported 00:07:36.918 EGE Aggregate Log Change Notices: Not Supported 00:07:36.918 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.918 Zone Descriptor Change Notices: Not Supported 00:07:36.918 Discovery Log Change Notices: Not Supported 00:07:36.918 Controller Attributes 00:07:36.918 128-bit Host Identifier: Not Supported 00:07:36.918 Non-Operational Permissive Mode: Not Supported 00:07:36.918 NVM Sets: Not Supported 00:07:36.918 Read Recovery Levels: Not Supported 00:07:36.918 Endurance Groups: Not Supported 00:07:36.918 Predictable Latency Mode: Not Supported 00:07:36.918 Traffic Based Keep Alive: Not Supported 00:07:36.918 Namespace Granularity: Not Supported 00:07:36.918 SQ Associations: Not Supported 00:07:36.918 UUID List: Not Supported 00:07:36.918 Multi-Domain Subsystem: Not Supported 00:07:36.918 Fixed Capacity Management: Not Supported 00:07:36.918 Variable Capacity Management: Not Supported 00:07:36.918 Delete Endurance Group: Not Supported 00:07:36.918 Delete NVM Set: Not Supported 00:07:36.918 Extended LBA Formats Supported: Supported 00:07:36.918 Flexible Data Placement Supported: Not Supported 00:07:36.918 00:07:36.918
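The nvme.sh lines above show the test looping over the bound PCIe addresses and re-running spdk_nvme_identify against each device in turn. An equivalent loop sketched in Python for illustration; nvme.sh itself is a bash script, so this is a stand-in, not its source, though the binary path and bdf list are copied from this log:

    # Sketch: the per-device identify loop visible in the log, as Python.
    import subprocess

    IDENTIFY = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify"
    BDFS = ["0000:00:10.0", "0000:00:11.0", "0000:00:12.0", "0000:00:13.0"]

    for bdf in BDFS:
        # One argv element for the transport string, as in the quoted shell arg.
        subprocess.run([IDENTIFY, "-r", f"trtype:PCIe traddr:{bdf}", "-i", "0"],
                       check=True)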
Controller Memory Buffer Support 00:07:36.918 ================================ 00:07:36.918 Supported: No 00:07:36.918 00:07:36.918 Persistent Memory Region Support 00:07:36.918 ================================ 00:07:36.918 Supported: No 00:07:36.918 00:07:36.918 Admin Command Set Attributes 00:07:36.918 ============================ 00:07:36.918 Security Send/Receive: Not Supported 00:07:36.918 Format NVM: Supported 00:07:36.918 Firmware Activate/Download: Not Supported 00:07:36.918 Namespace Management: Supported 00:07:36.918 Device Self-Test: Not Supported 00:07:36.918 Directives: Supported 00:07:36.918 NVMe-MI: Not Supported 00:07:36.918 Virtualization Management: Not Supported 00:07:36.918 Doorbell Buffer Config: Supported 00:07:36.918 Get LBA Status Capability: Not Supported 00:07:36.918 Command & Feature Lockdown Capability: Not Supported 00:07:36.918 Abort Command Limit: 4 00:07:36.918 Async Event Request Limit: 4 00:07:36.918 Number of Firmware Slots: N/A 00:07:36.918 Firmware Slot 1 Read-Only: N/A 00:07:36.918 Firmware Activation Without Reset: N/A 00:07:36.918 Multiple Update Detection Support: N/A 00:07:36.918 Firmware Update Granularity: No Information Provided 00:07:36.918 Per-Namespace SMART Log: Yes 00:07:36.918 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.918 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:36.918 Command Effects Log Page: Supported 00:07:36.918 Get Log Page Extended Data: Supported 00:07:36.918 Telemetry Log Pages: Not Supported 00:07:36.918 Persistent Event Log Pages: Not Supported 00:07:36.918 Supported Log Pages Log Page: May Support 00:07:36.918 Commands Supported & Effects Log Page: Not Supported 00:07:36.918 Feature Identifiers & Effects Log Page: May Support 00:07:36.918 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.918 Data Area 4 for Telemetry Log: Not Supported 00:07:36.918 Error Log Page Entries Supported: 1 00:07:36.918 Keep Alive: Not Supported 00:07:36.918 00:07:36.918 NVM Command Set Attributes 00:07:36.919 ========================== 00:07:36.919 Submission Queue Entry Size 00:07:36.919 Max: 64 00:07:36.919 Min: 64 00:07:36.919 Completion Queue Entry Size 00:07:36.919 Max: 16 00:07:36.919 Min: 16 00:07:36.919 Number of Namespaces: 256 00:07:36.919 Compare Command: Supported 00:07:36.919 Write Uncorrectable Command: Not Supported 00:07:36.919 Dataset Management Command: Supported 00:07:36.919 Write Zeroes Command: Supported 00:07:36.919 Set Features Save Field: Supported 00:07:36.919 Reservations: Not Supported 00:07:36.919 Timestamp: Supported 00:07:36.919 Copy: Supported 00:07:36.919 Volatile Write Cache: Present 00:07:36.919 Atomic Write Unit (Normal): 1 00:07:36.919 Atomic Write Unit (PFail): 1 00:07:36.919 Atomic Compare & Write Unit: 1 00:07:36.919 Fused Compare & Write: Not Supported 00:07:36.919 Scatter-Gather List 00:07:36.919 SGL Command Set: Supported 00:07:36.919 SGL Keyed: Not Supported 00:07:36.919 SGL Bit Bucket Descriptor: Not Supported 00:07:36.919 SGL Metadata Pointer: Not Supported 00:07:36.919 Oversized SGL: Not Supported 00:07:36.919 SGL Metadata Address: Not Supported 00:07:36.919 SGL Offset: Not Supported 00:07:36.919 Transport SGL Data Block: Not Supported 00:07:36.919 Replay Protected Memory Block: Not Supported 00:07:36.919 00:07:36.919 Firmware Slot Information 00:07:36.919 ========================= 00:07:36.919 Active slot: 1 00:07:36.919 Slot 1 Firmware Revision: 1.0 00:07:36.919 00:07:36.919 00:07:36.919 Commands Supported and Effects 00:07:36.919 ==============================
00:07:36.919 Admin Commands 00:07:36.919 -------------- 00:07:36.919 Delete I/O Submission Queue (00h): Supported 00:07:36.919 Create I/O Submission Queue (01h): Supported 00:07:36.919 Get Log Page (02h): Supported 00:07:36.919 Delete I/O Completion Queue (04h): Supported 00:07:36.919 Create I/O Completion Queue (05h): Supported 00:07:36.919 Identify (06h): Supported 00:07:36.919 Abort (08h): Supported 00:07:36.919 Set Features (09h): Supported 00:07:36.919 Get Features (0Ah): Supported 00:07:36.919 Asynchronous Event Request (0Ch): Supported 00:07:36.919 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.919 Directive Send (19h): Supported 00:07:36.919 Directive Receive (1Ah): Supported 00:07:36.919 Virtualization Management (1Ch): Supported 00:07:36.919 Doorbell Buffer Config (7Ch): Supported 00:07:36.919 Format NVM (80h): Supported LBA-Change 00:07:36.919 I/O Commands 00:07:36.919 ------------ 00:07:36.919 Flush (00h): Supported LBA-Change 00:07:36.919 Write (01h): Supported LBA-Change 00:07:36.919 Read (02h): Supported 00:07:36.919 Compare (05h): Supported 00:07:36.919 Write Zeroes (08h): Supported LBA-Change 00:07:36.919 Dataset Management (09h): Supported LBA-Change 00:07:36.919 Unknown (0Ch): Supported 00:07:36.919 Unknown (12h): Supported 00:07:36.919 Copy (19h): Supported LBA-Change 00:07:36.919 Unknown (1Dh): Supported LBA-Change 00:07:36.919 00:07:36.919 Error Log 00:07:36.919 ========= 00:07:36.919 00:07:36.919 Arbitration 00:07:36.919 =========== 00:07:36.919 Arbitration Burst: no limit 00:07:36.919 00:07:36.919 Power Management 00:07:36.919 ================ 00:07:36.919 Number of Power States: 1 00:07:36.919 Current Power State: Power State #0 00:07:36.919 Power State #0: 00:07:36.919 Max Power: 25.00 W 00:07:36.919 Non-Operational State: Operational 00:07:36.919 Entry Latency: 16 microseconds 00:07:36.919 Exit Latency: 4 microseconds 00:07:36.919 Relative Read Throughput: 0 00:07:36.919 Relative Read Latency: 0 00:07:36.919 Relative Write Throughput: 0 00:07:36.919 Relative Write Latency: 0 00:07:36.919 Idle Power: Not Reported 00:07:36.919 Active Power: Not Reported 00:07:36.919 Non-Operational Permissive Mode: Not Supported 00:07:36.919 00:07:36.919 Health Information 00:07:36.919 ================== 00:07:36.919 Critical Warnings: 00:07:36.919 Available Spare Space: OK 00:07:36.919 Temperature: OK 00:07:36.919 Device Reliability: OK 00:07:36.919 Read Only: No 00:07:36.919 Volatile Memory Backup: OK 00:07:36.919 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.919 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.919 Available Spare: 0% 00:07:36.919 Available Spare Threshold: 0% 00:07:36.919 Life Percentage Used: 0% 00:07:36.919 Data Units Read: 678 00:07:36.919 Data Units Written: 606 00:07:36.919 Host Read Commands: 38499 00:07:36.919 Host Write Commands: 38285 00:07:36.919 Controller Busy Time: 0 minutes 00:07:36.919 Power Cycles: 0 00:07:36.919 Power On Hours: 0 hours 00:07:36.919 Unsafe Shutdowns: 0 00:07:36.919 Unrecoverable Media Errors: 0 00:07:36.919 Lifetime Error Log Entries: 0 00:07:36.919 Warning Temperature Time: 0 minutes 00:07:36.919 Critical Temperature Time: 0 minutes 00:07:36.919 00:07:36.919 Number of Queues 00:07:36.919 ================ 00:07:36.919 Number of I/O Submission Queues: 64 00:07:36.919 Number of I/O Completion Queues: 64 00:07:36.919 00:07:36.919 ZNS Specific Controller Data 00:07:36.919 ============================ 00:07:36.919 Zone Append Size Limit: 0 00:07:36.919 00:07:36.919 00:07:36.919 Active Namespaces 
00:07:36.919 ================= 00:07:36.919 Namespace ID:1 00:07:36.919 Error Recovery Timeout: Unlimited 00:07:36.919 Command Set Identifier: NVM (00h) 00:07:36.919 Deallocate: Supported 00:07:36.919 Deallocated/Unwritten Error: Supported 00:07:36.919 Deallocated Read Value: All 0x00 00:07:36.919 Deallocate in Write Zeroes: Not Supported 00:07:36.919 Deallocated Guard Field: 0xFFFF 00:07:36.919 Flush: Supported 00:07:36.919 Reservation: Not Supported 00:07:36.919 Metadata Transferred as: Separate Metadata Buffer 00:07:36.919 Namespace Sharing Capabilities: Private 00:07:36.919 Size (in LBAs): 1548666 (5GiB) 00:07:36.919 Capacity (in LBAs): 1548666 (5GiB) 00:07:36.919 Utilization (in LBAs): 1548666 (5GiB) 00:07:36.919 Thin Provisioning: Not Supported 00:07:36.919 Per-NS Atomic Units: No 00:07:36.919 Maximum Single Source Range Length: 128 00:07:36.919 Maximum Copy Length: 128 00:07:36.919 Maximum Source Range Count: 128 00:07:36.919 NGUID/EUI64 Never Reused: No 00:07:36.919 Namespace Write Protected: No 00:07:36.919 Number of LBA Formats: 8 00:07:36.919 Current LBA Format: LBA Format #07 00:07:36.919 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.919 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.919 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.919 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.919 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.919 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.919 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.919 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.919 00:07:36.919 NVM Specific Namespace Data 00:07:36.919 =========================== 00:07:36.919 Logical Block Storage Tag Mask: 0 00:07:36.919 Protection Information Capabilities: 00:07:36.919 16b Guard Protection Information Storage Tag Support: No 00:07:36.919 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.919 Storage Tag Check Read Support: No 00:07:36.919 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.919 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.919 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:37.182 ===================================================== 00:07:37.182 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:37.182 ===================================================== 00:07:37.182 Controller Capabilities/Features 00:07:37.182 ================================ 00:07:37.182 Vendor ID: 1b36 00:07:37.182 Subsystem Vendor ID: 1af4 00:07:37.182 Serial Number: 12341 00:07:37.182 Model Number: QEMU NVMe Ctrl 00:07:37.182 Firmware Version: 8.0.0 
00:07:37.182 Recommended Arb Burst: 6 00:07:37.182 IEEE OUI Identifier: 00 54 52 00:07:37.182 Multi-path I/O 00:07:37.182 May have multiple subsystem ports: No 00:07:37.182 May have multiple controllers: No 00:07:37.182 Associated with SR-IOV VF: No 00:07:37.182 Max Data Transfer Size: 524288 00:07:37.182 Max Number of Namespaces: 256 00:07:37.182 Max Number of I/O Queues: 64 00:07:37.182 NVMe Specification Version (VS): 1.4 00:07:37.182 NVMe Specification Version (Identify): 1.4 00:07:37.182 Maximum Queue Entries: 2048 00:07:37.182 Contiguous Queues Required: Yes 00:07:37.182 Arbitration Mechanisms Supported 00:07:37.182 Weighted Round Robin: Not Supported 00:07:37.182 Vendor Specific: Not Supported 00:07:37.182 Reset Timeout: 7500 ms 00:07:37.182 Doorbell Stride: 4 bytes 00:07:37.182 NVM Subsystem Reset: Not Supported 00:07:37.182 Command Sets Supported 00:07:37.182 NVM Command Set: Supported 00:07:37.182 Boot Partition: Not Supported 00:07:37.182 Memory Page Size Minimum: 4096 bytes 00:07:37.182 Memory Page Size Maximum: 65536 bytes 00:07:37.182 Persistent Memory Region: Not Supported 00:07:37.182 Optional Asynchronous Events Supported 00:07:37.182 Namespace Attribute Notices: Supported 00:07:37.182 Firmware Activation Notices: Not Supported 00:07:37.182 ANA Change Notices: Not Supported 00:07:37.182 PLE Aggregate Log Change Notices: Not Supported 00:07:37.182 LBA Status Info Alert Notices: Not Supported 00:07:37.182 EGE Aggregate Log Change Notices: Not Supported 00:07:37.182 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.182 Zone Descriptor Change Notices: Not Supported 00:07:37.182 Discovery Log Change Notices: Not Supported 00:07:37.182 Controller Attributes 00:07:37.182 128-bit Host Identifier: Not Supported 00:07:37.182 Non-Operational Permissive Mode: Not Supported 00:07:37.182 NVM Sets: Not Supported 00:07:37.182 Read Recovery Levels: Not Supported 00:07:37.182 Endurance Groups: Not Supported 00:07:37.182 Predictable Latency Mode: Not Supported 00:07:37.182 Traffic Based Keep Alive: Not Supported 00:07:37.182 Namespace Granularity: Not Supported 00:07:37.182 SQ Associations: Not Supported 00:07:37.182 UUID List: Not Supported 00:07:37.182 Multi-Domain Subsystem: Not Supported 00:07:37.182 Fixed Capacity Management: Not Supported 00:07:37.182 Variable Capacity Management: Not Supported 00:07:37.182 Delete Endurance Group: Not Supported 00:07:37.182 Delete NVM Set: Not Supported 00:07:37.182 Extended LBA Formats Supported: Supported 00:07:37.182 Flexible Data Placement Supported: Not Supported 00:07:37.182 00:07:37.182 Controller Memory Buffer Support 00:07:37.182 ================================ 00:07:37.182 Supported: No 00:07:37.182 00:07:37.183 Persistent Memory Region Support 00:07:37.183 ================================ 00:07:37.183 Supported: No 00:07:37.183 00:07:37.183 Admin Command Set Attributes 00:07:37.183 ============================ 00:07:37.183 Security Send/Receive: Not Supported 00:07:37.183 Format NVM: Supported 00:07:37.183 Firmware Activate/Download: Not Supported 00:07:37.183 Namespace Management: Supported 00:07:37.183 Device Self-Test: Not Supported 00:07:37.183 Directives: Supported 00:07:37.183 NVMe-MI: Not Supported 00:07:37.183 Virtualization Management: Not Supported 00:07:37.183 Doorbell Buffer Config: Supported 00:07:37.183 Get LBA Status Capability: Not Supported 00:07:37.183 Command & Feature Lockdown Capability: Not Supported 00:07:37.183 Abort Command Limit: 4 00:07:37.183 Async Event Request Limit: 4 00:07:37.183 Number of Firmware
Slots: N/A 00:07:37.183 Firmware Slot 1 Read-Only: N/A 00:07:37.183 Firmware Activation Without Reset: N/A 00:07:37.183 Multiple Update Detection Support: N/A 00:07:37.183 Firmware Update Granularity: No Information Provided 00:07:37.183 Per-Namespace SMART Log: Yes 00:07:37.183 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.183 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:37.183 Command Effects Log Page: Supported 00:07:37.183 Get Log Page Extended Data: Supported 00:07:37.183 Telemetry Log Pages: Not Supported 00:07:37.183 Persistent Event Log Pages: Not Supported 00:07:37.183 Supported Log Pages Log Page: May Support 00:07:37.183 Commands Supported & Effects Log Page: Not Supported 00:07:37.183 Feature Identifiers & Effects Log Page: May Support 00:07:37.183 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.183 Data Area 4 for Telemetry Log: Not Supported 00:07:37.183 Error Log Page Entries Supported: 1 00:07:37.183 Keep Alive: Not Supported 00:07:37.183 00:07:37.183 NVM Command Set Attributes 00:07:37.183 ========================== 00:07:37.183 Submission Queue Entry Size 00:07:37.183 Max: 64 00:07:37.183 Min: 64 00:07:37.183 Completion Queue Entry Size 00:07:37.183 Max: 16 00:07:37.183 Min: 16 00:07:37.183 Number of Namespaces: 256 00:07:37.183 Compare Command: Supported 00:07:37.183 Write Uncorrectable Command: Not Supported 00:07:37.183 Dataset Management Command: Supported 00:07:37.183 Write Zeroes Command: Supported 00:07:37.183 Set Features Save Field: Supported 00:07:37.183 Reservations: Not Supported 00:07:37.183 Timestamp: Supported 00:07:37.183 Copy: Supported 00:07:37.183 Volatile Write Cache: Present 00:07:37.183 Atomic Write Unit (Normal): 1 00:07:37.183 Atomic Write Unit (PFail): 1 00:07:37.183 Atomic Compare & Write Unit: 1 00:07:37.183 Fused Compare & Write: Not Supported 00:07:37.183 Scatter-Gather List 00:07:37.183 SGL Command Set: Supported 00:07:37.183 SGL Keyed: Not Supported 00:07:37.183 SGL Bit Bucket Descriptor: Not Supported 00:07:37.183 SGL Metadata Pointer: Not Supported 00:07:37.183 Oversized SGL: Not Supported 00:07:37.183 SGL Metadata Address: Not Supported 00:07:37.183 SGL Offset: Not Supported 00:07:37.183 Transport SGL Data Block: Not Supported 00:07:37.183 Replay Protected Memory Block: Not Supported 00:07:37.183 00:07:37.183 Firmware Slot Information 00:07:37.183 ========================= 00:07:37.183 Active slot: 1 00:07:37.183 Slot 1 Firmware Revision: 1.0 00:07:37.183 00:07:37.183 00:07:37.183 Commands Supported and Effects 00:07:37.183 ============================== 00:07:37.183 Admin Commands 00:07:37.183 -------------- 00:07:37.183 Delete I/O Submission Queue (00h): Supported 00:07:37.183 Create I/O Submission Queue (01h): Supported 00:07:37.183 Get Log Page (02h): Supported 00:07:37.183 Delete I/O Completion Queue (04h): Supported 00:07:37.183 Create I/O Completion Queue (05h): Supported 00:07:37.183 Identify (06h): Supported 00:07:37.183 Abort (08h): Supported 00:07:37.183 Set Features (09h): Supported 00:07:37.183 Get Features (0Ah): Supported 00:07:37.183 Asynchronous Event Request (0Ch): Supported 00:07:37.183 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.183 Directive Send (19h): Supported 00:07:37.183 Directive Receive (1Ah): Supported 00:07:37.183 Virtualization Management (1Ch): Supported 00:07:37.183 Doorbell Buffer Config (7Ch): Supported 00:07:37.183 Format NVM (80h): Supported LBA-Change 00:07:37.183 I/O Commands 00:07:37.183 ------------ 00:07:37.183 Flush (00h): Supported LBA-Change
00:07:37.183 Write (01h): Supported LBA-Change 00:07:37.183 Read (02h): Supported 00:07:37.183 Compare (05h): Supported 00:07:37.183 Write Zeroes (08h): Supported LBA-Change 00:07:37.183 Dataset Management (09h): Supported LBA-Change 00:07:37.183 Unknown (0Ch): Supported 00:07:37.183 Unknown (12h): Supported 00:07:37.183 Copy (19h): Supported LBA-Change 00:07:37.183 Unknown (1Dh): Supported LBA-Change 00:07:37.183 00:07:37.183 Error Log 00:07:37.183 ========= 00:07:37.183 00:07:37.183 Arbitration 00:07:37.183 =========== 00:07:37.183 Arbitration Burst: no limit 00:07:37.183 00:07:37.183 Power Management 00:07:37.183 ================ 00:07:37.183 Number of Power States: 1 00:07:37.183 Current Power State: Power State #0 00:07:37.183 Power State #0: 00:07:37.183 Max Power: 25.00 W 00:07:37.183 Non-Operational State: Operational 00:07:37.183 Entry Latency: 16 microseconds 00:07:37.183 Exit Latency: 4 microseconds 00:07:37.183 Relative Read Throughput: 0 00:07:37.183 Relative Read Latency: 0 00:07:37.183 Relative Write Throughput: 0 00:07:37.183 Relative Write Latency: 0 00:07:37.183 Idle Power: Not Reported 00:07:37.183 Active Power: Not Reported 00:07:37.183 Non-Operational Permissive Mode: Not Supported 00:07:37.183 00:07:37.183 Health Information 00:07:37.183 ================== 00:07:37.183 Critical Warnings: 00:07:37.183 Available Spare Space: OK 00:07:37.183 Temperature: OK 00:07:37.183 Device Reliability: OK 00:07:37.183 Read Only: No 00:07:37.183 Volatile Memory Backup: OK 00:07:37.183 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.183 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.183 Available Spare: 0% 00:07:37.183 Available Spare Threshold: 0% 00:07:37.183 Life Percentage Used: 0% 00:07:37.183 Data Units Read: 1141 00:07:37.183 Data Units Written: 1014 00:07:37.183 Host Read Commands: 59076 00:07:37.183 Host Write Commands: 57968 00:07:37.183 Controller Busy Time: 0 minutes 00:07:37.183 Power Cycles: 0 00:07:37.183 Power On Hours: 0 hours 00:07:37.183 Unsafe Shutdowns: 0 00:07:37.183 Unrecoverable Media Errors: 0 00:07:37.183 Lifetime Error Log Entries: 0 00:07:37.183 Warning Temperature Time: 0 minutes 00:07:37.183 Critical Temperature Time: 0 minutes 00:07:37.183 00:07:37.183 Number of Queues 00:07:37.183 ================ 00:07:37.183 Number of I/O Submission Queues: 64 00:07:37.183 Number of I/O Completion Queues: 64 00:07:37.183 00:07:37.183 ZNS Specific Controller Data 00:07:37.183 ============================ 00:07:37.183 Zone Append Size Limit: 0 00:07:37.183 00:07:37.183 00:07:37.183 Active Namespaces 00:07:37.183 ================= 00:07:37.183 Namespace ID:1 00:07:37.183 Error Recovery Timeout: Unlimited 00:07:37.183 Command Set Identifier: NVM (00h) 00:07:37.183 Deallocate: Supported 00:07:37.183 Deallocated/Unwritten Error: Supported 00:07:37.183 Deallocated Read Value: All 0x00 00:07:37.183 Deallocate in Write Zeroes: Not Supported 00:07:37.183 Deallocated Guard Field: 0xFFFF 00:07:37.183 Flush: Supported 00:07:37.183 Reservation: Not Supported 00:07:37.183 Namespace Sharing Capabilities: Private 00:07:37.183 Size (in LBAs): 1310720 (5GiB) 00:07:37.183 Capacity (in LBAs): 1310720 (5GiB) 00:07:37.183 Utilization (in LBAs): 1310720 (5GiB) 00:07:37.183 Thin Provisioning: Not Supported 00:07:37.183 Per-NS Atomic Units: No 00:07:37.183 Maximum Single Source Range Length: 128 00:07:37.183 Maximum Copy Length: 128 00:07:37.183 Maximum Source Range Count: 128 00:07:37.183 NGUID/EUI64 Never Reused: No 00:07:37.183 Namespace Write Protected: No 00:07:37.183 Number 
of LBA Formats: 8 00:07:37.183 Current LBA Format: LBA Format #04 00:07:37.183 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.183 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.183 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.183 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.183 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.183 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.183 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.183 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.183 00:07:37.183 NVM Specific Namespace Data 00:07:37.183 =========================== 00:07:37.184 Logical Block Storage Tag Mask: 0 00:07:37.184 Protection Information Capabilities: 00:07:37.184 16b Guard Protection Information Storage Tag Support: No 00:07:37.184 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.184 Storage Tag Check Read Support: No 00:07:37.184 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.184 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.184 09:25:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:37.451 ===================================================== 00:07:37.451 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:37.451 ===================================================== 00:07:37.451 Controller Capabilities/Features 00:07:37.451 ================================ 00:07:37.451 Vendor ID: 1b36 00:07:37.451 Subsystem Vendor ID: 1af4 00:07:37.451 Serial Number: 12342 00:07:37.451 Model Number: QEMU NVMe Ctrl 00:07:37.451 Firmware Version: 8.0.0 00:07:37.451 Recommended Arb Burst: 6 00:07:37.451 IEEE OUI Identifier: 00 54 52 00:07:37.451 Multi-path I/O 00:07:37.451 May have multiple subsystem ports: No 00:07:37.451 May have multiple controllers: No 00:07:37.451 Associated with SR-IOV VF: No 00:07:37.451 Max Data Transfer Size: 524288 00:07:37.451 Max Number of Namespaces: 256 00:07:37.451 Max Number of I/O Queues: 64 00:07:37.451 NVMe Specification Version (VS): 1.4 00:07:37.451 NVMe Specification Version (Identify): 1.4 00:07:37.451 Maximum Queue Entries: 2048 00:07:37.451 Contiguous Queues Required: Yes 00:07:37.451 Arbitration Mechanisms Supported 00:07:37.451 Weighted Round Robin: Not Supported 00:07:37.451 Vendor Specific: Not Supported 00:07:37.451 Reset Timeout: 7500 ms 00:07:37.451 Doorbell Stride: 4 bytes 00:07:37.451 NVM Subsystem Reset: Not Supported 00:07:37.451 Command Sets Supported 00:07:37.451 NVM Command Set: Supported 00:07:37.451 Boot Partition: Not Supported 00:07:37.451 Memory Page Size Minimum: 4096 bytes 00:07:37.451 Memory Page Size Maximum: 
65536 bytes 00:07:37.451 Persistent Memory Region: Not Supported 00:07:37.451 Optional Asynchronous Events Supported 00:07:37.451 Namespace Attribute Notices: Supported 00:07:37.451 Firmware Activation Notices: Not Supported 00:07:37.451 ANA Change Notices: Not Supported 00:07:37.451 PLE Aggregate Log Change Notices: Not Supported 00:07:37.451 LBA Status Info Alert Notices: Not Supported 00:07:37.451 EGE Aggregate Log Change Notices: Not Supported 00:07:37.451 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.451 Zone Descriptor Change Notices: Not Supported 00:07:37.451 Discovery Log Change Notices: Not Supported 00:07:37.451 Controller Attributes 00:07:37.451 128-bit Host Identifier: Not Supported 00:07:37.451 Non-Operational Permissive Mode: Not Supported 00:07:37.451 NVM Sets: Not Supported 00:07:37.451 Read Recovery Levels: Not Supported 00:07:37.451 Endurance Groups: Not Supported 00:07:37.451 Predictable Latency Mode: Not Supported 00:07:37.451 Traffic Based Keep Alive: Not Supported 00:07:37.451 Namespace Granularity: Not Supported 00:07:37.451 SQ Associations: Not Supported 00:07:37.451 UUID List: Not Supported 00:07:37.451 Multi-Domain Subsystem: Not Supported 00:07:37.451 Fixed Capacity Management: Not Supported 00:07:37.451 Variable Capacity Management: Not Supported 00:07:37.451 Delete Endurance Group: Not Supported 00:07:37.451 Delete NVM Set: Not Supported 00:07:37.451 Extended LBA Formats Supported: Supported 00:07:37.451 Flexible Data Placement Supported: Not Supported 00:07:37.451 00:07:37.451 Controller Memory Buffer Support 00:07:37.451 ================================ 00:07:37.451 Supported: No 00:07:37.451 00:07:37.451 Persistent Memory Region Support 00:07:37.451 ================================ 00:07:37.451 Supported: No 00:07:37.451 00:07:37.451 Admin Command Set Attributes 00:07:37.451 ============================ 00:07:37.451 Security Send/Receive: Not Supported 00:07:37.451 Format NVM: Supported 00:07:37.451 Firmware Activate/Download: Not Supported 00:07:37.451 Namespace Management: Supported 00:07:37.451 Device Self-Test: Not Supported 00:07:37.451 Directives: Supported 00:07:37.451 NVMe-MI: Not Supported 00:07:37.451 Virtualization Management: Not Supported 00:07:37.451 Doorbell Buffer Config: Supported 00:07:37.451 Get LBA Status Capability: Not Supported 00:07:37.451 Command & Feature Lockdown Capability: Not Supported 00:07:37.451 Abort Command Limit: 4 00:07:37.451 Async Event Request Limit: 4 00:07:37.451 Number of Firmware Slots: N/A 00:07:37.451 Firmware Slot 1 Read-Only: N/A 00:07:37.451 Firmware Activation Without Reset: N/A 00:07:37.451 Multiple Update Detection Support: N/A 00:07:37.451 Firmware Update Granularity: No Information Provided 00:07:37.451 Per-Namespace SMART Log: Yes 00:07:37.451 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.451 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:37.451 Command Effects Log Page: Supported 00:07:37.451 Get Log Page Extended Data: Supported 00:07:37.451 Telemetry Log Pages: Not Supported 00:07:37.451 Persistent Event Log Pages: Not Supported 00:07:37.451 Supported Log Pages Log Page: May Support 00:07:37.451 Commands Supported & Effects Log Page: Not Supported 00:07:37.451 Feature Identifiers & Effects Log Page: May Support 00:07:37.451 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.451 Data Area 4 for Telemetry Log: Not Supported 00:07:37.451 Error Log Page Entries Supported: 1 00:07:37.451 Keep Alive: Not Supported 00:07:37.451 00:07:37.451 NVM Command Set Attributes
00:07:37.451 ========================== 00:07:37.451 Submission Queue Entry Size 00:07:37.451 Max: 64 00:07:37.451 Min: 64 00:07:37.451 Completion Queue Entry Size 00:07:37.451 Max: 16 00:07:37.451 Min: 16 00:07:37.451 Number of Namespaces: 256 00:07:37.451 Compare Command: Supported 00:07:37.451 Write Uncorrectable Command: Not Supported 00:07:37.451 Dataset Management Command: Supported 00:07:37.451 Write Zeroes Command: Supported 00:07:37.451 Set Features Save Field: Supported 00:07:37.451 Reservations: Not Supported 00:07:37.451 Timestamp: Supported 00:07:37.451 Copy: Supported 00:07:37.451 Volatile Write Cache: Present 00:07:37.452 Atomic Write Unit (Normal): 1 00:07:37.452 Atomic Write Unit (PFail): 1 00:07:37.452 Atomic Compare & Write Unit: 1 00:07:37.452 Fused Compare & Write: Not Supported 00:07:37.452 Scatter-Gather List 00:07:37.452 SGL Command Set: Supported 00:07:37.452 SGL Keyed: Not Supported 00:07:37.452 SGL Bit Bucket Descriptor: Not Supported 00:07:37.452 SGL Metadata Pointer: Not Supported 00:07:37.452 Oversized SGL: Not Supported 00:07:37.452 SGL Metadata Address: Not Supported 00:07:37.452 SGL Offset: Not Supported 00:07:37.452 Transport SGL Data Block: Not Supported 00:07:37.452 Replay Protected Memory Block: Not Supported 00:07:37.452 00:07:37.452 Firmware Slot Information 00:07:37.452 ========================= 00:07:37.452 Active slot: 1 00:07:37.452 Slot 1 Firmware Revision: 1.0 00:07:37.452 00:07:37.452 00:07:37.452 Commands Supported and Effects 00:07:37.452 ============================== 00:07:37.452 Admin Commands 00:07:37.452 -------------- 00:07:37.452 Delete I/O Submission Queue (00h): Supported 00:07:37.452 Create I/O Submission Queue (01h): Supported 00:07:37.452 Get Log Page (02h): Supported 00:07:37.452 Delete I/O Completion Queue (04h): Supported 00:07:37.452 Create I/O Completion Queue (05h): Supported 00:07:37.452 Identify (06h): Supported 00:07:37.452 Abort (08h): Supported 00:07:37.452 Set Features (09h): Supported 00:07:37.452 Get Features (0Ah): Supported 00:07:37.452 Asynchronous Event Request (0Ch): Supported 00:07:37.452 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.452 Directive Send (19h): Supported 00:07:37.452 Directive Receive (1Ah): Supported 00:07:37.452 Virtualization Management (1Ch): Supported 00:07:37.452 Doorbell Buffer Config (7Ch): Supported 00:07:37.452 Format NVM (80h): Supported LBA-Change 00:07:37.452 I/O Commands 00:07:37.452 ------------ 00:07:37.452 Flush (00h): Supported LBA-Change 00:07:37.452 Write (01h): Supported LBA-Change 00:07:37.452 Read (02h): Supported 00:07:37.452 Compare (05h): Supported 00:07:37.452 Write Zeroes (08h): Supported LBA-Change 00:07:37.452 Dataset Management (09h): Supported LBA-Change 00:07:37.452 Unknown (0Ch): Supported 00:07:37.452 Unknown (12h): Supported 00:07:37.452 Copy (19h): Supported LBA-Change 00:07:37.452 Unknown (1Dh): Supported LBA-Change 00:07:37.452 00:07:37.452 Error Log 00:07:37.452 ========= 00:07:37.452 00:07:37.452 Arbitration 00:07:37.452 =========== 00:07:37.452 Arbitration Burst: no limit 00:07:37.452 00:07:37.452 Power Management 00:07:37.452 ================ 00:07:37.452 Number of Power States: 1 00:07:37.452 Current Power State: Power State #0 00:07:37.452 Power State #0: 00:07:37.452 Max Power: 25.00 W 00:07:37.452 Non-Operational State: Operational 00:07:37.452 Entry Latency: 16 microseconds 00:07:37.452 Exit Latency: 4 microseconds 00:07:37.452 Relative Read Throughput: 0 00:07:37.452 Relative Read Latency: 0 00:07:37.452 Relative Write 
Throughput: 0 00:07:37.452 Relative Write Latency: 0 00:07:37.452 Idle Power: Not Reported 00:07:37.452 Active Power: Not Reported 00:07:37.452 Non-Operational Permissive Mode: Not Supported 00:07:37.452 00:07:37.452 Health Information 00:07:37.452 ================== 00:07:37.452 Critical Warnings: 00:07:37.452 Available Spare Space: OK 00:07:37.452 Temperature: OK 00:07:37.452 Device Reliability: OK 00:07:37.452 Read Only: No 00:07:37.452 Volatile Memory Backup: OK 00:07:37.452 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.452 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.452 Available Spare: 0% 00:07:37.452 Available Spare Threshold: 0% 00:07:37.452 Life Percentage Used: 0% 00:07:37.452 Data Units Read: 2389 00:07:37.452 Data Units Written: 2176 00:07:37.452 Host Read Commands: 118804 00:07:37.452 Host Write Commands: 117073 00:07:37.452 Controller Busy Time: 0 minutes 00:07:37.452 Power Cycles: 0 00:07:37.452 Power On Hours: 0 hours 00:07:37.452 Unsafe Shutdowns: 0 00:07:37.452 Unrecoverable Media Errors: 0 00:07:37.452 Lifetime Error Log Entries: 0 00:07:37.452 Warning Temperature Time: 0 minutes 00:07:37.452 Critical Temperature Time: 0 minutes 00:07:37.452 00:07:37.452 Number of Queues 00:07:37.452 ================ 00:07:37.452 Number of I/O Submission Queues: 64 00:07:37.452 Number of I/O Completion Queues: 64 00:07:37.452 00:07:37.452 ZNS Specific Controller Data 00:07:37.452 ============================ 00:07:37.452 Zone Append Size Limit: 0 00:07:37.452 00:07:37.452 00:07:37.452 Active Namespaces 00:07:37.452 ================= 00:07:37.452 Namespace ID:1 00:07:37.452 Error Recovery Timeout: Unlimited 00:07:37.452 Command Set Identifier: NVM (00h) 00:07:37.452 Deallocate: Supported 00:07:37.452 Deallocated/Unwritten Error: Supported 00:07:37.452 Deallocated Read Value: All 0x00 00:07:37.452 Deallocate in Write Zeroes: Not Supported 00:07:37.452 Deallocated Guard Field: 0xFFFF 00:07:37.452 Flush: Supported 00:07:37.452 Reservation: Not Supported 00:07:37.452 Namespace Sharing Capabilities: Private 00:07:37.452 Size (in LBAs): 1048576 (4GiB) 00:07:37.452 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.452 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.452 Thin Provisioning: Not Supported 00:07:37.452 Per-NS Atomic Units: No 00:07:37.452 Maximum Single Source Range Length: 128 00:07:37.452 Maximum Copy Length: 128 00:07:37.452 Maximum Source Range Count: 128 00:07:37.452 NGUID/EUI64 Never Reused: No 00:07:37.452 Namespace Write Protected: No 00:07:37.452 Number of LBA Formats: 8 00:07:37.452 Current LBA Format: LBA Format #04 00:07:37.452 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.452 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.452 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.452 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.452 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.452 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.452 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.452 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.452 00:07:37.452 NVM Specific Namespace Data 00:07:37.452 =========================== 00:07:37.452 Logical Block Storage Tag Mask: 0 00:07:37.452 Protection Information Capabilities: 00:07:37.452 16b Guard Protection Information Storage Tag Support: No 00:07:37.452 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.452 Storage Tag Check Read Support: No 00:07:37.452 Extended LBA Format #00: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:37.452 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.452 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.452 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.452 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.452 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.452 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.452 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.452 Namespace ID:2 00:07:37.452 Error Recovery Timeout: Unlimited 00:07:37.452 Command Set Identifier: NVM (00h) 00:07:37.452 Deallocate: Supported 00:07:37.452 Deallocated/Unwritten Error: Supported 00:07:37.452 Deallocated Read Value: All 0x00 00:07:37.452 Deallocate in Write Zeroes: Not Supported 00:07:37.452 Deallocated Guard Field: 0xFFFF 00:07:37.452 Flush: Supported 00:07:37.452 Reservation: Not Supported 00:07:37.452 Namespace Sharing Capabilities: Private 00:07:37.452 Size (in LBAs): 1048576 (4GiB) 00:07:37.452 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.452 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.452 Thin Provisioning: Not Supported 00:07:37.452 Per-NS Atomic Units: No 00:07:37.452 Maximum Single Source Range Length: 128 00:07:37.452 Maximum Copy Length: 128 00:07:37.452 Maximum Source Range Count: 128 00:07:37.452 NGUID/EUI64 Never Reused: No 00:07:37.452 Namespace Write Protected: No 00:07:37.452 Number of LBA Formats: 8 00:07:37.452 Current LBA Format: LBA Format #04 00:07:37.452 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.452 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.452 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.452 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.452 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.452 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.452 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.452 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.452 00:07:37.452 NVM Specific Namespace Data 00:07:37.452 =========================== 00:07:37.452 Logical Block Storage Tag Mask: 0 00:07:37.452 Protection Information Capabilities: 00:07:37.452 16b Guard Protection Information Storage Tag Support: No 00:07:37.453 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.453 Storage Tag Check Read Support: No 00:07:37.453 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Namespace ID:3 00:07:37.453 
Error Recovery Timeout: Unlimited 00:07:37.453 Command Set Identifier: NVM (00h) 00:07:37.453 Deallocate: Supported 00:07:37.453 Deallocated/Unwritten Error: Supported 00:07:37.453 Deallocated Read Value: All 0x00 00:07:37.453 Deallocate in Write Zeroes: Not Supported 00:07:37.453 Deallocated Guard Field: 0xFFFF 00:07:37.453 Flush: Supported 00:07:37.453 Reservation: Not Supported 00:07:37.453 Namespace Sharing Capabilities: Private 00:07:37.453 Size (in LBAs): 1048576 (4GiB) 00:07:37.453 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.453 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.453 Thin Provisioning: Not Supported 00:07:37.453 Per-NS Atomic Units: No 00:07:37.453 Maximum Single Source Range Length: 128 00:07:37.453 Maximum Copy Length: 128 00:07:37.453 Maximum Source Range Count: 128 00:07:37.453 NGUID/EUI64 Never Reused: No 00:07:37.453 Namespace Write Protected: No 00:07:37.453 Number of LBA Formats: 8 00:07:37.453 Current LBA Format: LBA Format #04 00:07:37.453 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.453 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.453 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.453 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.453 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.453 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.453 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.453 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.453 00:07:37.453 NVM Specific Namespace Data 00:07:37.453 =========================== 00:07:37.453 Logical Block Storage Tag Mask: 0 00:07:37.453 Protection Information Capabilities: 00:07:37.453 16b Guard Protection Information Storage Tag Support: No 00:07:37.453 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.453 Storage Tag Check Read Support: No 00:07:37.453 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.453 09:25:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.453 09:25:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:37.724 ===================================================== 00:07:37.724 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:37.724 ===================================================== 00:07:37.724 Controller Capabilities/Features 00:07:37.724 ================================ 00:07:37.724 Vendor ID: 1b36 00:07:37.724 Subsystem Vendor ID: 1af4 00:07:37.724 Serial Number: 12343 00:07:37.724 Model Number: QEMU NVMe Ctrl 00:07:37.724 Firmware Version: 8.0.0 00:07:37.724 Recommended Arb Burst: 6 00:07:37.724 IEEE OUI Identifier: 00 54 52 00:07:37.724 Multi-path I/O 00:07:37.724 May have 
multiple subsystem ports: No 00:07:37.724 May have multiple controllers: Yes 00:07:37.724 Associated with SR-IOV VF: No 00:07:37.724 Max Data Transfer Size: 524288 00:07:37.724 Max Number of Namespaces: 256 00:07:37.724 Max Number of I/O Queues: 64 00:07:37.724 NVMe Specification Version (VS): 1.4 00:07:37.724 NVMe Specification Version (Identify): 1.4 00:07:37.724 Maximum Queue Entries: 2048 00:07:37.724 Contiguous Queues Required: Yes 00:07:37.724 Arbitration Mechanisms Supported 00:07:37.724 Weighted Round Robin: Not Supported 00:07:37.724 Vendor Specific: Not Supported 00:07:37.724 Reset Timeout: 7500 ms 00:07:37.724 Doorbell Stride: 4 bytes 00:07:37.724 NVM Subsystem Reset: Not Supported 00:07:37.724 Command Sets Supported 00:07:37.724 NVM Command Set: Supported 00:07:37.724 Boot Partition: Not Supported 00:07:37.724 Memory Page Size Minimum: 4096 bytes 00:07:37.724 Memory Page Size Maximum: 65536 bytes 00:07:37.724 Persistent Memory Region: Not Supported 00:07:37.724 Optional Asynchronous Events Supported 00:07:37.724 Namespace Attribute Notices: Supported 00:07:37.724 Firmware Activation Notices: Not Supported 00:07:37.724 ANA Change Notices: Not Supported 00:07:37.724 PLE Aggregate Log Change Notices: Not Supported 00:07:37.724 LBA Status Info Alert Notices: Not Supported 00:07:37.724 EGE Aggregate Log Change Notices: Not Supported 00:07:37.724 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.724 Zone Descriptor Change Notices: Not Supported 00:07:37.724 Discovery Log Change Notices: Not Supported 00:07:37.724 Controller Attributes 00:07:37.724 128-bit Host Identifier: Not Supported 00:07:37.724 Non-Operational Permissive Mode: Not Supported 00:07:37.724 NVM Sets: Not Supported 00:07:37.724 Read Recovery Levels: Not Supported 00:07:37.724 Endurance Groups: Supported 00:07:37.724 Predictable Latency Mode: Not Supported 00:07:37.724 Traffic Based Keep Alive: Not Supported 00:07:37.724 Namespace Granularity: Not Supported 00:07:37.724 SQ Associations: Not Supported 00:07:37.724 UUID List: Not Supported 00:07:37.724 Multi-Domain Subsystem: Not Supported 00:07:37.724 Fixed Capacity Management: Not Supported 00:07:37.724 Variable Capacity Management: Not Supported 00:07:37.724 Delete Endurance Group: Not Supported 00:07:37.724 Delete NVM Set: Not Supported 00:07:37.724 Extended LBA Formats Supported: Supported 00:07:37.724 Flexible Data Placement Supported: Supported 00:07:37.724 00:07:37.724 Controller Memory Buffer Support 00:07:37.724 ================================ 00:07:37.724 Supported: No 00:07:37.724 00:07:37.724 Persistent Memory Region Support 00:07:37.724 ================================ 00:07:37.724 Supported: No 00:07:37.724 00:07:37.724 Admin Command Set Attributes 00:07:37.724 ============================ 00:07:37.724 Security Send/Receive: Not Supported 00:07:37.724 Format NVM: Supported 00:07:37.724 Firmware Activate/Download: Not Supported 00:07:37.724 Namespace Management: Supported 00:07:37.724 Device Self-Test: Not Supported 00:07:37.724 Directives: Supported 00:07:37.724 NVMe-MI: Not Supported 00:07:37.724 Virtualization Management: Not Supported 00:07:37.724 Doorbell Buffer Config: Supported 00:07:37.724 Get LBA Status Capability: Not Supported 00:07:37.724 Command & Feature Lockdown Capability: Not Supported 00:07:37.724 Abort Command Limit: 4 00:07:37.724 Async Event Request Limit: 4 00:07:37.724 Number of Firmware Slots: N/A 00:07:37.724 Firmware Slot 1 Read-Only: N/A 00:07:37.724 Firmware Activation Without Reset: N/A 00:07:37.724 Multiple Update
Detection Support: N/A 00:07:37.724 Firmware Update Granularity: No Information Provided 00:07:37.724 Per-Namespace SMART Log: Yes 00:07:37.724 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.724 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:37.724 Command Effects Log Page: Supported 00:07:37.724 Get Log Page Extended Data: Supported 00:07:37.724 Telemetry Log Pages: Not Supported 00:07:37.724 Persistent Event Log Pages: Not Supported 00:07:37.724 Supported Log Pages Log Page: May Support 00:07:37.724 Commands Supported & Effects Log Page: Not Supported 00:07:37.724 Feature Identifiers & Effects Log Page: May Support 00:07:37.724 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.724 Data Area 4 for Telemetry Log: Not Supported 00:07:37.724 Error Log Page Entries Supported: 1 00:07:37.724 Keep Alive: Not Supported 00:07:37.724 00:07:37.724 NVM Command Set Attributes 00:07:37.724 ========================== 00:07:37.724 Submission Queue Entry Size 00:07:37.724 Max: 64 00:07:37.724 Min: 64 00:07:37.724 Completion Queue Entry Size 00:07:37.724 Max: 16 00:07:37.724 Min: 16 00:07:37.724 Number of Namespaces: 256 00:07:37.724 Compare Command: Supported 00:07:37.724 Write Uncorrectable Command: Not Supported 00:07:37.724 Dataset Management Command: Supported 00:07:37.724 Write Zeroes Command: Supported 00:07:37.724 Set Features Save Field: Supported 00:07:37.724 Reservations: Not Supported 00:07:37.724 Timestamp: Supported 00:07:37.724 Copy: Supported 00:07:37.724 Volatile Write Cache: Present 00:07:37.724 Atomic Write Unit (Normal): 1 00:07:37.724 Atomic Write Unit (PFail): 1 00:07:37.724 Atomic Compare & Write Unit: 1 00:07:37.724 Fused Compare & Write: Not Supported 00:07:37.724 Scatter-Gather List 00:07:37.724 SGL Command Set: Supported 00:07:37.724 SGL Keyed: Not Supported 00:07:37.724 SGL Bit Bucket Descriptor: Not Supported 00:07:37.724 SGL Metadata Pointer: Not Supported 00:07:37.724 Oversized SGL: Not Supported 00:07:37.724 SGL Metadata Address: Not Supported 00:07:37.724 SGL Offset: Not Supported 00:07:37.724 Transport SGL Data Block: Not Supported 00:07:37.724 Replay Protected Memory Block: Not Supported 00:07:37.724 00:07:37.724 Firmware Slot Information 00:07:37.724 ========================= 00:07:37.724 Active slot: 1 00:07:37.724 Slot 1 Firmware Revision: 1.0 00:07:37.724 00:07:37.724 00:07:37.724 Commands Supported and Effects 00:07:37.724 ============================== 00:07:37.724 Admin Commands 00:07:37.724 -------------- 00:07:37.724 Delete I/O Submission Queue (00h): Supported 00:07:37.724 Create I/O Submission Queue (01h): Supported 00:07:37.724 Get Log Page (02h): Supported 00:07:37.724 Delete I/O Completion Queue (04h): Supported 00:07:37.724 Create I/O Completion Queue (05h): Supported 00:07:37.725 Identify (06h): Supported 00:07:37.725 Abort (08h): Supported 00:07:37.725 Set Features (09h): Supported 00:07:37.725 Get Features (0Ah): Supported 00:07:37.725 Asynchronous Event Request (0Ch): Supported 00:07:37.725 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.725 Directive Send (19h): Supported 00:07:37.725 Directive Receive (1Ah): Supported 00:07:37.725 Virtualization Management (1Ch): Supported 00:07:37.725 Doorbell Buffer Config (7Ch): Supported 00:07:37.725 Format NVM (80h): Supported LBA-Change 00:07:37.725 I/O Commands 00:07:37.725 ------------ 00:07:37.725 Flush (00h): Supported LBA-Change 00:07:37.725 Write (01h): Supported LBA-Change 00:07:37.725 Read (02h): Supported 00:07:37.725 Compare (05h): Supported 00:07:37.725 
Write Zeroes (08h): Supported LBA-Change 00:07:37.725 Dataset Management (09h): Supported LBA-Change 00:07:37.725 Unknown (0Ch): Supported 00:07:37.725 Unknown (12h): Supported 00:07:37.725 Copy (19h): Supported LBA-Change 00:07:37.725 Unknown (1Dh): Supported LBA-Change 00:07:37.725 00:07:37.725 Error Log 00:07:37.725 ========= 00:07:37.725 00:07:37.725 Arbitration 00:07:37.725 =========== 00:07:37.725 Arbitration Burst: no limit 00:07:37.725 00:07:37.725 Power Management 00:07:37.725 ================ 00:07:37.725 Number of Power States: 1 00:07:37.725 Current Power State: Power State #0 00:07:37.725 Power State #0: 00:07:37.725 Max Power: 25.00 W 00:07:37.725 Non-Operational State: Operational 00:07:37.725 Entry Latency: 16 microseconds 00:07:37.725 Exit Latency: 4 microseconds 00:07:37.725 Relative Read Throughput: 0 00:07:37.725 Relative Read Latency: 0 00:07:37.725 Relative Write Throughput: 0 00:07:37.725 Relative Write Latency: 0 00:07:37.725 Idle Power: Not Reported 00:07:37.725 Active Power: Not Reported 00:07:37.725 Non-Operational Permissive Mode: Not Supported 00:07:37.725 00:07:37.725 Health Information 00:07:37.725 ================== 00:07:37.725 Critical Warnings: 00:07:37.725 Available Spare Space: OK 00:07:37.725 Temperature: OK 00:07:37.725 Device Reliability: OK 00:07:37.725 Read Only: No 00:07:37.725 Volatile Memory Backup: OK 00:07:37.725 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.725 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.725 Available Spare: 0% 00:07:37.725 Available Spare Threshold: 0% 00:07:37.725 Life Percentage Used: 0% 00:07:37.725 Data Units Read: 1088 00:07:37.725 Data Units Written: 1017 00:07:37.725 Host Read Commands: 41940 00:07:37.725 Host Write Commands: 41363 00:07:37.725 Controller Busy Time: 0 minutes 00:07:37.725 Power Cycles: 0 00:07:37.725 Power On Hours: 0 hours 00:07:37.725 Unsafe Shutdowns: 0 00:07:37.725 Unrecoverable Media Errors: 0 00:07:37.725 Lifetime Error Log Entries: 0 00:07:37.725 Warning Temperature Time: 0 minutes 00:07:37.725 Critical Temperature Time: 0 minutes 00:07:37.725 00:07:37.725 Number of Queues 00:07:37.725 ================ 00:07:37.725 Number of I/O Submission Queues: 64 00:07:37.725 Number of I/O Completion Queues: 64 00:07:37.725 00:07:37.725 ZNS Specific Controller Data 00:07:37.725 ============================ 00:07:37.725 Zone Append Size Limit: 0 00:07:37.725 00:07:37.725 00:07:37.725 Active Namespaces 00:07:37.725 ================= 00:07:37.725 Namespace ID:1 00:07:37.725 Error Recovery Timeout: Unlimited 00:07:37.725 Command Set Identifier: NVM (00h) 00:07:37.725 Deallocate: Supported 00:07:37.725 Deallocated/Unwritten Error: Supported 00:07:37.725 Deallocated Read Value: All 0x00 00:07:37.725 Deallocate in Write Zeroes: Not Supported 00:07:37.725 Deallocated Guard Field: 0xFFFF 00:07:37.725 Flush: Supported 00:07:37.725 Reservation: Not Supported 00:07:37.725 Namespace Sharing Capabilities: Multiple Controllers 00:07:37.725 Size (in LBAs): 262144 (1GiB) 00:07:37.725 Capacity (in LBAs): 262144 (1GiB) 00:07:37.725 Utilization (in LBAs): 262144 (1GiB) 00:07:37.725 Thin Provisioning: Not Supported 00:07:37.725 Per-NS Atomic Units: No 00:07:37.725 Maximum Single Source Range Length: 128 00:07:37.725 Maximum Copy Length: 128 00:07:37.725 Maximum Source Range Count: 128 00:07:37.725 NGUID/EUI64 Never Reused: No 00:07:37.725 Namespace Write Protected: No 00:07:37.725 Endurance group ID: 1 00:07:37.725 Number of LBA Formats: 8 00:07:37.725 Current LBA Format: LBA Format #04 00:07:37.725 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:07:37.725 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.725 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.725 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.725 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.725 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.725 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.725 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.725 00:07:37.725 Get Feature FDP: 00:07:37.725 ================ 00:07:37.725 Enabled: Yes 00:07:37.725 FDP configuration index: 0 00:07:37.725 00:07:37.725 FDP configurations log page 00:07:37.725 =========================== 00:07:37.725 Number of FDP configurations: 1 00:07:37.725 Version: 0 00:07:37.725 Size: 112 00:07:37.725 FDP Configuration Descriptor: 0 00:07:37.725 Descriptor Size: 96 00:07:37.725 Reclaim Group Identifier format: 2 00:07:37.725 FDP Volatile Write Cache: Not Present 00:07:37.725 FDP Configuration: Valid 00:07:37.725 Vendor Specific Size: 0 00:07:37.725 Number of Reclaim Groups: 2 00:07:37.725 Number of Reclaim Unit Handles: 8 00:07:37.725 Max Placement Identifiers: 128 00:07:37.725 Number of Namespaces Supported: 256 00:07:37.725 Reclaim Unit Nominal Size: 6000000 bytes 00:07:37.725 Estimated Reclaim Unit Time Limit: Not Reported 00:07:37.725 RUH Desc #000: RUH Type: Initially Isolated 00:07:37.725 RUH Desc #001: RUH Type: Initially Isolated 00:07:37.725 RUH Desc #002: RUH Type: Initially Isolated 00:07:37.725 RUH Desc #003: RUH Type: Initially Isolated 00:07:37.725 RUH Desc #004: RUH Type: Initially Isolated 00:07:37.725 RUH Desc #005: RUH Type: Initially Isolated 00:07:37.725 RUH Desc #006: RUH Type: Initially Isolated 00:07:37.725 RUH Desc #007: RUH Type: Initially Isolated 00:07:37.725 00:07:37.725 FDP reclaim unit handle usage log page 00:07:37.725 ====================================== 00:07:37.725 Number of Reclaim Unit Handles: 8 00:07:37.725 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:37.725 RUH Usage Desc #001: RUH Attributes: Unused 00:07:37.725 RUH Usage Desc #002: RUH Attributes: Unused 00:07:37.725 RUH Usage Desc #003: RUH Attributes: Unused 00:07:37.725 RUH Usage Desc #004: RUH Attributes: Unused 00:07:37.725 RUH Usage Desc #005: RUH Attributes: Unused 00:07:37.725 RUH Usage Desc #006: RUH Attributes: Unused 00:07:37.725 RUH Usage Desc #007: RUH Attributes: Unused 00:07:37.725 00:07:37.725 FDP statistics log page 00:07:37.725 ======================= 00:07:37.725 Host bytes with metadata written: 621584384 00:07:37.725 Media bytes with metadata written: 621666304 00:07:37.725 Media bytes erased: 0 00:07:37.725 00:07:37.725 FDP events log page 00:07:37.725 =================== 00:07:37.725 Number of FDP events: 0 00:07:37.725 00:07:37.725 NVM Specific Namespace Data 00:07:37.725 =========================== 00:07:37.725 Logical Block Storage Tag Mask: 0 00:07:37.725 Protection Information Capabilities: 00:07:37.725 16b Guard Protection Information Storage Tag Support: No 00:07:37.725 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.725 Storage Tag Check Read Support: No 00:07:37.725 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 Extended LBA Format #03: Storage
Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.725 00:07:37.725 real 0m1.191s 00:07:37.725 user 0m0.417s 00:07:37.725 sys 0m0.552s 00:07:37.725 09:25:05 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.725 09:25:05 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:37.725 ************************************ 00:07:37.725 END TEST nvme_identify 00:07:37.725 ************************************ 00:07:37.725 09:25:05 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:37.725 09:25:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.725 09:25:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.725 09:25:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.725 ************************************ 00:07:37.725 START TEST nvme_perf 00:07:37.725 ************************************ 00:07:37.726 09:25:05 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:37.726 09:25:05 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:39.114 Initializing NVMe Controllers 00:07:39.114 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:39.114 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.114 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.114 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.114 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:39.114 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:39.114 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:39.114 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:39.114 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:39.114 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:39.114 Initialization complete. Launching workers. 
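The spdk_nvme_perf invocation logged just above is what produces the latency tables that follow. As a rough sketch of how to reproduce this run by hand outside the harness (assuming the PCIe devices have already been bound to a userspace driver, e.g. with SPDK's scripts/setup.sh, and that the commands run as root):

  cd /home/vagrant/spdk_repo/spdk
  # Dump the full identify data for a single controller, as nvme/nvme.sh does per bdf above
  ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
  # Queue depth 128 (-q), 12288-byte reads (-o), 1 second (-t); with -LL the run also
  # prints the per-device latency summaries and histograms seen below
  ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Without a -r transport filter, spdk_nvme_perf attaches to every controller SPDK can claim, which is why the results that follow cover six namespaces across the four QEMU controllers.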
00:07:39.114 ======================================================== 00:07:39.114 Latency(us) 00:07:39.114 Device Information : IOPS MiB/s Average min max 00:07:39.114 PCIE (0000:00:10.0) NSID 1 from core 0: 12960.38 151.88 9879.92 5966.96 34856.66 00:07:39.114 PCIE (0000:00:11.0) NSID 1 from core 0: 12960.38 151.88 9872.32 5591.29 34187.42 00:07:39.114 PCIE (0000:00:13.0) NSID 1 from core 0: 12960.38 151.88 9861.82 4823.15 34287.61 00:07:39.114 PCIE (0000:00:12.0) NSID 1 from core 0: 12960.38 151.88 9851.39 4403.05 33712.70 00:07:39.114 PCIE (0000:00:12.0) NSID 2 from core 0: 12960.38 151.88 9841.06 3961.89 33140.17 00:07:39.114 PCIE (0000:00:12.0) NSID 3 from core 0: 13024.22 152.63 9782.50 3667.43 27194.85 00:07:39.114 ======================================================== 00:07:39.114 Total : 77826.10 912.02 9848.11 3667.43 34856.66 00:07:39.114 00:07:39.114 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.114 ================================================================================= 00:07:39.114 1.00000% : 8065.969us 00:07:39.114 10.00000% : 8519.680us 00:07:39.114 25.00000% : 8822.154us 00:07:39.114 50.00000% : 9275.865us 00:07:39.114 75.00000% : 9931.225us 00:07:39.114 90.00000% : 11897.305us 00:07:39.114 95.00000% : 13308.849us 00:07:39.114 98.00000% : 14518.745us 00:07:39.114 99.00000% : 15627.815us 00:07:39.114 99.50000% : 28432.542us 00:07:39.114 99.90000% : 34683.668us 00:07:39.114 99.99000% : 34885.317us 00:07:39.114 99.99900% : 34885.317us 00:07:39.114 99.99990% : 34885.317us 00:07:39.114 99.99999% : 34885.317us 00:07:39.114 00:07:39.114 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.114 ================================================================================= 00:07:39.114 1.00000% : 8166.794us 00:07:39.114 10.00000% : 8570.092us 00:07:39.114 25.00000% : 8872.566us 00:07:39.114 50.00000% : 9225.452us 00:07:39.114 75.00000% : 9931.225us 00:07:39.114 90.00000% : 11846.892us 00:07:39.114 95.00000% : 13208.025us 00:07:39.114 98.00000% : 14619.569us 00:07:39.114 99.00000% : 15224.517us 00:07:39.114 99.50000% : 28230.892us 00:07:39.114 99.90000% : 34078.720us 00:07:39.114 99.99000% : 34280.369us 00:07:39.114 99.99900% : 34280.369us 00:07:39.114 99.99990% : 34280.369us 00:07:39.114 99.99999% : 34280.369us 00:07:39.114 00:07:39.114 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.114 ================================================================================= 00:07:39.114 1.00000% : 8065.969us 00:07:39.114 10.00000% : 8519.680us 00:07:39.114 25.00000% : 8822.154us 00:07:39.114 50.00000% : 9225.452us 00:07:39.114 75.00000% : 9880.812us 00:07:39.114 90.00000% : 12048.542us 00:07:39.114 95.00000% : 13107.200us 00:07:39.114 98.00000% : 14821.218us 00:07:39.114 99.00000% : 16031.114us 00:07:39.114 99.50000% : 28230.892us 00:07:39.114 99.90000% : 34078.720us 00:07:39.114 99.99000% : 34280.369us 00:07:39.114 99.99900% : 34482.018us 00:07:39.114 99.99990% : 34482.018us 00:07:39.114 99.99999% : 34482.018us 00:07:39.114 00:07:39.114 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.114 ================================================================================= 00:07:39.114 1.00000% : 7965.145us 00:07:39.114 10.00000% : 8570.092us 00:07:39.114 25.00000% : 8872.566us 00:07:39.114 50.00000% : 9225.452us 00:07:39.114 75.00000% : 9880.812us 00:07:39.114 90.00000% : 12098.954us 00:07:39.114 95.00000% : 13006.375us 00:07:39.114 98.00000% : 14821.218us 00:07:39.114 
99.00000% : 15728.640us 00:07:39.114 99.50000% : 27827.594us 00:07:39.114 99.90000% : 33675.422us 00:07:39.114 99.99000% : 33877.071us 00:07:39.114 99.99900% : 33877.071us 00:07:39.114 99.99990% : 33877.071us 00:07:39.114 99.99999% : 33877.071us 00:07:39.114 00:07:39.114 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.114 ================================================================================= 00:07:39.114 1.00000% : 7914.732us 00:07:39.114 10.00000% : 8570.092us 00:07:39.114 25.00000% : 8872.566us 00:07:39.114 50.00000% : 9225.452us 00:07:39.114 75.00000% : 9880.812us 00:07:39.114 90.00000% : 12098.954us 00:07:39.114 95.00000% : 13107.200us 00:07:39.114 98.00000% : 14619.569us 00:07:39.114 99.00000% : 15930.289us 00:07:39.114 99.50000% : 27222.646us 00:07:39.114 99.90000% : 33070.474us 00:07:39.114 99.99000% : 33272.123us 00:07:39.114 99.99900% : 33272.123us 00:07:39.114 99.99990% : 33272.123us 00:07:39.114 99.99999% : 33272.123us 00:07:39.114 00:07:39.114 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.114 ================================================================================= 00:07:39.114 1.00000% : 7914.732us 00:07:39.114 10.00000% : 8570.092us 00:07:39.114 25.00000% : 8872.566us 00:07:39.114 50.00000% : 9225.452us 00:07:39.114 75.00000% : 9880.812us 00:07:39.114 90.00000% : 11998.129us 00:07:39.114 95.00000% : 13208.025us 00:07:39.114 98.00000% : 14619.569us 00:07:39.114 99.00000% : 15829.465us 00:07:39.114 99.50000% : 21173.169us 00:07:39.114 99.90000% : 27020.997us 00:07:39.114 99.99000% : 27222.646us 00:07:39.114 99.99900% : 27222.646us 00:07:39.114 99.99990% : 27222.646us 00:07:39.114 99.99999% : 27222.646us 00:07:39.114 00:07:39.114 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.114 ============================================================================== 00:07:39.114 Range in us Cumulative IO count 00:07:39.114 5948.652 - 5973.858: 0.0231% ( 3) 00:07:39.114 5973.858 - 5999.065: 0.0539% ( 4) 00:07:39.114 5999.065 - 6024.271: 0.0693% ( 2) 00:07:39.114 6024.271 - 6049.477: 0.0770% ( 1) 00:07:39.114 6049.477 - 6074.683: 0.0847% ( 1) 00:07:39.114 6074.683 - 6099.889: 0.1078% ( 3) 00:07:39.114 6099.889 - 6125.095: 0.1155% ( 1) 00:07:39.114 6125.095 - 6150.302: 0.1308% ( 2) 00:07:39.114 6150.302 - 6175.508: 0.1385% ( 1) 00:07:39.114 6175.508 - 6200.714: 0.1539% ( 2) 00:07:39.114 6200.714 - 6225.920: 0.1616% ( 1) 00:07:39.114 6225.920 - 6251.126: 0.1847% ( 3) 00:07:39.114 6251.126 - 6276.332: 0.1924% ( 1) 00:07:39.114 6276.332 - 6301.538: 0.2155% ( 3) 00:07:39.114 6301.538 - 6326.745: 0.2232% ( 1) 00:07:39.114 6326.745 - 6351.951: 0.2386% ( 2) 00:07:39.114 6351.951 - 6377.157: 0.2540% ( 2) 00:07:39.114 6377.157 - 6402.363: 0.2617% ( 1) 00:07:39.114 6402.363 - 6427.569: 0.2848% ( 3) 00:07:39.114 6427.569 - 6452.775: 0.2925% ( 1) 00:07:39.114 6452.775 - 6503.188: 0.3233% ( 4) 00:07:39.114 6503.188 - 6553.600: 0.3464% ( 3) 00:07:39.114 6553.600 - 6604.012: 0.3849% ( 5) 00:07:39.114 6604.012 - 6654.425: 0.4079% ( 3) 00:07:39.114 6654.425 - 6704.837: 0.4387% ( 4) 00:07:39.114 6704.837 - 6755.249: 0.4695% ( 4) 00:07:39.114 6755.249 - 6805.662: 0.4926% ( 3) 00:07:39.114 7813.908 - 7864.320: 0.5465% ( 7) 00:07:39.114 7864.320 - 7914.732: 0.6542% ( 14) 00:07:39.114 7914.732 - 7965.145: 0.7620% ( 14) 00:07:39.114 7965.145 - 8015.557: 0.8929% ( 17) 00:07:39.114 8015.557 - 8065.969: 1.1700% ( 36) 00:07:39.114 8065.969 - 8116.382: 1.4470% ( 36) 00:07:39.114 8116.382 - 8166.794: 1.9243% ( 62) 
00:07:39.114 8166.794 - 8217.206: 2.5323% ( 79) 00:07:39.114 8217.206 - 8267.618: 3.3944% ( 112) 00:07:39.114 8267.618 - 8318.031: 4.3103% ( 119) 00:07:39.114 8318.031 - 8368.443: 5.5111% ( 156) 00:07:39.114 8368.443 - 8418.855: 6.8427% ( 173) 00:07:39.114 8418.855 - 8469.268: 8.5437% ( 221) 00:07:39.114 8469.268 - 8519.680: 10.5219% ( 257) 00:07:39.114 8519.680 - 8570.092: 12.5385% ( 262) 00:07:39.114 8570.092 - 8620.505: 14.8399% ( 299) 00:07:39.114 8620.505 - 8670.917: 17.2106% ( 308) 00:07:39.114 8670.917 - 8721.329: 19.8815% ( 347) 00:07:39.114 8721.329 - 8771.742: 22.5292% ( 344) 00:07:39.114 8771.742 - 8822.154: 25.3156% ( 362) 00:07:39.114 8822.154 - 8872.566: 28.3328% ( 392) 00:07:39.114 8872.566 - 8922.978: 31.1730% ( 369) 00:07:39.114 8922.978 - 8973.391: 34.2057% ( 394) 00:07:39.114 8973.391 - 9023.803: 37.4923% ( 427) 00:07:39.114 9023.803 - 9074.215: 40.5557% ( 398) 00:07:39.114 9074.215 - 9124.628: 43.5884% ( 394) 00:07:39.114 9124.628 - 9175.040: 46.6364% ( 396) 00:07:39.114 9175.040 - 9225.452: 49.6998% ( 398) 00:07:39.114 9225.452 - 9275.865: 52.4400% ( 356) 00:07:39.114 9275.865 - 9326.277: 55.1570% ( 353) 00:07:39.114 9326.277 - 9376.689: 58.0357% ( 374) 00:07:39.114 9376.689 - 9427.102: 60.4988% ( 320) 00:07:39.114 9427.102 - 9477.514: 62.7694% ( 295) 00:07:39.114 9477.514 - 9527.926: 64.9015% ( 277) 00:07:39.114 9527.926 - 9578.338: 66.9258% ( 263) 00:07:39.114 9578.338 - 9628.751: 68.6038% ( 218) 00:07:39.114 9628.751 - 9679.163: 70.0046% ( 182) 00:07:39.114 9679.163 - 9729.575: 71.2746% ( 165) 00:07:39.114 9729.575 - 9779.988: 72.4061% ( 147) 00:07:39.114 9779.988 - 9830.400: 73.3913% ( 128) 00:07:39.114 9830.400 - 9880.812: 74.2688% ( 114) 00:07:39.114 9880.812 - 9931.225: 75.0770% ( 105) 00:07:39.114 9931.225 - 9981.637: 75.6004% ( 68) 00:07:39.114 9981.637 - 10032.049: 76.1392% ( 70) 00:07:39.114 10032.049 - 10082.462: 76.5625% ( 55) 00:07:39.114 10082.462 - 10132.874: 76.9089% ( 45) 00:07:39.114 10132.874 - 10183.286: 77.2937% ( 50) 00:07:39.114 10183.286 - 10233.698: 77.7017% ( 53) 00:07:39.114 10233.698 - 10284.111: 78.0403% ( 44) 00:07:39.114 10284.111 - 10334.523: 78.4021% ( 47) 00:07:39.114 10334.523 - 10384.935: 78.8023% ( 52) 00:07:39.114 10384.935 - 10435.348: 79.1179% ( 41) 00:07:39.114 10435.348 - 10485.760: 79.4797% ( 47) 00:07:39.114 10485.760 - 10536.172: 79.8491% ( 48) 00:07:39.114 10536.172 - 10586.585: 80.1647% ( 41) 00:07:39.114 10586.585 - 10636.997: 80.5496% ( 50) 00:07:39.114 10636.997 - 10687.409: 80.9652% ( 54) 00:07:39.114 10687.409 - 10737.822: 81.2885% ( 42) 00:07:39.114 10737.822 - 10788.234: 81.7811% ( 64) 00:07:39.114 10788.234 - 10838.646: 82.1813% ( 52) 00:07:39.114 10838.646 - 10889.058: 82.5662% ( 50) 00:07:39.114 10889.058 - 10939.471: 83.0357% ( 61) 00:07:39.114 10939.471 - 10989.883: 83.4360% ( 52) 00:07:39.114 10989.883 - 11040.295: 83.8670% ( 56) 00:07:39.114 11040.295 - 11090.708: 84.3596% ( 64) 00:07:39.114 11090.708 - 11141.120: 84.7752% ( 54) 00:07:39.114 11141.120 - 11191.532: 85.1832% ( 53) 00:07:39.115 11191.532 - 11241.945: 85.5757% ( 51) 00:07:39.115 11241.945 - 11292.357: 85.9683% ( 51) 00:07:39.115 11292.357 - 11342.769: 86.4840% ( 67) 00:07:39.115 11342.769 - 11393.182: 86.8534% ( 48) 00:07:39.115 11393.182 - 11443.594: 87.1613% ( 40) 00:07:39.115 11443.594 - 11494.006: 87.5308% ( 48) 00:07:39.115 11494.006 - 11544.418: 87.9387% ( 53) 00:07:39.115 11544.418 - 11594.831: 88.2851% ( 45) 00:07:39.115 11594.831 - 11645.243: 88.6776% ( 51) 00:07:39.115 11645.243 - 11695.655: 89.0625% ( 50) 00:07:39.115 11695.655 - 
11746.068: 89.3550% ( 38) 00:07:39.115 11746.068 - 11796.480: 89.7091% ( 46) 00:07:39.115 11796.480 - 11846.892: 89.8861% ( 23) 00:07:39.115 11846.892 - 11897.305: 90.0939% ( 27) 00:07:39.115 11897.305 - 11947.717: 90.3017% ( 27) 00:07:39.115 11947.717 - 11998.129: 90.5018% ( 26) 00:07:39.115 11998.129 - 12048.542: 90.7174% ( 28) 00:07:39.115 12048.542 - 12098.954: 90.9252% ( 27) 00:07:39.115 12098.954 - 12149.366: 91.1330% ( 27) 00:07:39.115 12149.366 - 12199.778: 91.3100% ( 23) 00:07:39.115 12199.778 - 12250.191: 91.5563% ( 32) 00:07:39.115 12250.191 - 12300.603: 91.6872% ( 17) 00:07:39.115 12300.603 - 12351.015: 91.8873% ( 26) 00:07:39.115 12351.015 - 12401.428: 92.0951% ( 27) 00:07:39.115 12401.428 - 12451.840: 92.3183% ( 29) 00:07:39.115 12451.840 - 12502.252: 92.5108% ( 25) 00:07:39.115 12502.252 - 12552.665: 92.6955% ( 24) 00:07:39.115 12552.665 - 12603.077: 92.9110% ( 28) 00:07:39.115 12603.077 - 12653.489: 93.1111% ( 26) 00:07:39.115 12653.489 - 12703.902: 93.3113% ( 26) 00:07:39.115 12703.902 - 12754.314: 93.4806% ( 22) 00:07:39.115 12754.314 - 12804.726: 93.6115% ( 17) 00:07:39.115 12804.726 - 12855.138: 93.7423% ( 17) 00:07:39.115 12855.138 - 12905.551: 93.9424% ( 26) 00:07:39.115 12905.551 - 13006.375: 94.2272% ( 37) 00:07:39.115 13006.375 - 13107.200: 94.6352% ( 53) 00:07:39.115 13107.200 - 13208.025: 94.9661% ( 43) 00:07:39.115 13208.025 - 13308.849: 95.2586% ( 38) 00:07:39.115 13308.849 - 13409.674: 95.5588% ( 39) 00:07:39.115 13409.674 - 13510.498: 95.9129% ( 46) 00:07:39.115 13510.498 - 13611.323: 96.2131% ( 39) 00:07:39.115 13611.323 - 13712.148: 96.4978% ( 37) 00:07:39.115 13712.148 - 13812.972: 96.7518% ( 33) 00:07:39.115 13812.972 - 13913.797: 96.9751% ( 29) 00:07:39.115 13913.797 - 14014.622: 97.1521% ( 23) 00:07:39.115 14014.622 - 14115.446: 97.4215% ( 35) 00:07:39.115 14115.446 - 14216.271: 97.5677% ( 19) 00:07:39.115 14216.271 - 14317.095: 97.7833% ( 28) 00:07:39.115 14317.095 - 14417.920: 97.9449% ( 21) 00:07:39.115 14417.920 - 14518.745: 98.0450% ( 13) 00:07:39.115 14518.745 - 14619.569: 98.2066% ( 21) 00:07:39.115 14619.569 - 14720.394: 98.3451% ( 18) 00:07:39.115 14720.394 - 14821.218: 98.4837% ( 18) 00:07:39.115 14821.218 - 14922.043: 98.5530% ( 9) 00:07:39.115 14922.043 - 15022.868: 98.6068% ( 7) 00:07:39.115 15022.868 - 15123.692: 98.6992% ( 12) 00:07:39.115 15123.692 - 15224.517: 98.7685% ( 9) 00:07:39.115 15224.517 - 15325.342: 98.8454% ( 10) 00:07:39.115 15325.342 - 15426.166: 98.9224% ( 10) 00:07:39.115 15426.166 - 15526.991: 98.9840% ( 8) 00:07:39.115 15526.991 - 15627.815: 99.0148% ( 4) 00:07:39.115 27020.997 - 27222.646: 99.0533% ( 5) 00:07:39.115 27222.646 - 27424.295: 99.1225% ( 9) 00:07:39.115 27424.295 - 27625.945: 99.1995% ( 10) 00:07:39.115 27625.945 - 27827.594: 99.2919% ( 12) 00:07:39.115 27827.594 - 28029.243: 99.3381% ( 6) 00:07:39.115 28029.243 - 28230.892: 99.4458% ( 14) 00:07:39.115 28230.892 - 28432.542: 99.5074% ( 8) 00:07:39.115 33473.772 - 33675.422: 99.5228% ( 2) 00:07:39.115 33675.422 - 33877.071: 99.6075% ( 11) 00:07:39.115 33877.071 - 34078.720: 99.6998% ( 12) 00:07:39.115 34078.720 - 34280.369: 99.7691% ( 9) 00:07:39.115 34280.369 - 34482.018: 99.8692% ( 13) 00:07:39.115 34482.018 - 34683.668: 99.9384% ( 9) 00:07:39.115 34683.668 - 34885.317: 100.0000% ( 8) 00:07:39.115 00:07:39.115 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.115 ============================================================================== 00:07:39.115 Range in us Cumulative IO count 00:07:39.115 5570.560 - 5595.766: 0.0077% ( 1) 
00:07:39.115 5595.766 - 5620.972: 0.0616% ( 7) 00:07:39.115 5620.972 - 5646.178: 0.0847% ( 3) 00:07:39.115 5646.178 - 5671.385: 0.1001% ( 2) 00:07:39.115 5671.385 - 5696.591: 0.1078% ( 1) 00:07:39.115 5696.591 - 5721.797: 0.1232% ( 2) 00:07:39.115 5747.003 - 5772.209: 0.1308% ( 1) 00:07:39.115 5772.209 - 5797.415: 0.1462% ( 2) 00:07:39.115 5797.415 - 5822.622: 0.1616% ( 2) 00:07:39.115 5822.622 - 5847.828: 0.1770% ( 2) 00:07:39.115 5847.828 - 5873.034: 0.1924% ( 2) 00:07:39.115 5873.034 - 5898.240: 0.2155% ( 3) 00:07:39.115 5898.240 - 5923.446: 0.2309% ( 2) 00:07:39.115 5923.446 - 5948.652: 0.2386% ( 1) 00:07:39.115 5948.652 - 5973.858: 0.2617% ( 3) 00:07:39.115 5973.858 - 5999.065: 0.2771% ( 2) 00:07:39.115 5999.065 - 6024.271: 0.2925% ( 2) 00:07:39.115 6024.271 - 6049.477: 0.3079% ( 2) 00:07:39.115 6049.477 - 6074.683: 0.3233% ( 2) 00:07:39.115 6074.683 - 6099.889: 0.3387% ( 2) 00:07:39.115 6099.889 - 6125.095: 0.3541% ( 2) 00:07:39.115 6125.095 - 6150.302: 0.3695% ( 2) 00:07:39.115 6150.302 - 6175.508: 0.3849% ( 2) 00:07:39.115 6175.508 - 6200.714: 0.4002% ( 2) 00:07:39.115 6200.714 - 6225.920: 0.4233% ( 3) 00:07:39.115 6225.920 - 6251.126: 0.4387% ( 2) 00:07:39.115 6251.126 - 6276.332: 0.4541% ( 2) 00:07:39.115 6276.332 - 6301.538: 0.4695% ( 2) 00:07:39.115 6301.538 - 6326.745: 0.4926% ( 3) 00:07:39.115 7864.320 - 7914.732: 0.5003% ( 1) 00:07:39.115 7914.732 - 7965.145: 0.5311% ( 4) 00:07:39.115 7965.145 - 8015.557: 0.5850% ( 7) 00:07:39.115 8015.557 - 8065.969: 0.7235% ( 18) 00:07:39.115 8065.969 - 8116.382: 0.8775% ( 20) 00:07:39.115 8116.382 - 8166.794: 1.2315% ( 46) 00:07:39.115 8166.794 - 8217.206: 1.7087% ( 62) 00:07:39.115 8217.206 - 8267.618: 2.3938% ( 89) 00:07:39.115 8267.618 - 8318.031: 3.2174% ( 107) 00:07:39.115 8318.031 - 8368.443: 4.1872% ( 126) 00:07:39.115 8368.443 - 8418.855: 5.4726% ( 167) 00:07:39.115 8418.855 - 8469.268: 7.0659% ( 207) 00:07:39.115 8469.268 - 8519.680: 8.9671% ( 247) 00:07:39.115 8519.680 - 8570.092: 11.0530% ( 271) 00:07:39.115 8570.092 - 8620.505: 13.2158% ( 281) 00:07:39.115 8620.505 - 8670.917: 15.8251% ( 339) 00:07:39.115 8670.917 - 8721.329: 18.5653% ( 356) 00:07:39.115 8721.329 - 8771.742: 21.4440% ( 374) 00:07:39.115 8771.742 - 8822.154: 24.4689% ( 393) 00:07:39.115 8822.154 - 8872.566: 27.7171% ( 422) 00:07:39.115 8872.566 - 8922.978: 31.0037% ( 427) 00:07:39.115 8922.978 - 8973.391: 34.2211% ( 418) 00:07:39.115 8973.391 - 9023.803: 37.5539% ( 433) 00:07:39.115 9023.803 - 9074.215: 40.9329% ( 439) 00:07:39.115 9074.215 - 9124.628: 44.3427% ( 443) 00:07:39.115 9124.628 - 9175.040: 47.5369% ( 415) 00:07:39.115 9175.040 - 9225.452: 50.8775% ( 434) 00:07:39.115 9225.452 - 9275.865: 53.8562% ( 387) 00:07:39.115 9275.865 - 9326.277: 56.6810% ( 367) 00:07:39.115 9326.277 - 9376.689: 59.4135% ( 355) 00:07:39.115 9376.689 - 9427.102: 61.8534% ( 317) 00:07:39.115 9427.102 - 9477.514: 64.0394% ( 284) 00:07:39.115 9477.514 - 9527.926: 65.9637% ( 250) 00:07:39.115 9527.926 - 9578.338: 67.6262% ( 216) 00:07:39.115 9578.338 - 9628.751: 69.0964% ( 191) 00:07:39.115 9628.751 - 9679.163: 70.4510% ( 176) 00:07:39.115 9679.163 - 9729.575: 71.6749% ( 159) 00:07:39.115 9729.575 - 9779.988: 72.7217% ( 136) 00:07:39.115 9779.988 - 9830.400: 73.6530% ( 121) 00:07:39.115 9830.400 - 9880.812: 74.4304% ( 101) 00:07:39.115 9880.812 - 9931.225: 75.0616% ( 82) 00:07:39.115 9931.225 - 9981.637: 75.6158% ( 72) 00:07:39.115 9981.637 - 10032.049: 76.0622% ( 58) 00:07:39.115 10032.049 - 10082.462: 76.4624% ( 52) 00:07:39.115 10082.462 - 10132.874: 76.7549% ( 38) 00:07:39.115 
10132.874 - 10183.286: 77.0705% ( 41) 00:07:39.115 10183.286 - 10233.698: 77.3938% ( 42) 00:07:39.115 10233.698 - 10284.111: 77.8094% ( 54) 00:07:39.115 10284.111 - 10334.523: 78.1327% ( 42) 00:07:39.115 10334.523 - 10384.935: 78.4791% ( 45) 00:07:39.115 10384.935 - 10435.348: 78.8023% ( 42) 00:07:39.115 10435.348 - 10485.760: 79.1256% ( 42) 00:07:39.115 10485.760 - 10536.172: 79.4951% ( 48) 00:07:39.115 10536.172 - 10586.585: 79.8337% ( 44) 00:07:39.115 10586.585 - 10636.997: 80.2186% ( 50) 00:07:39.115 10636.997 - 10687.409: 80.6419% ( 55) 00:07:39.115 10687.409 - 10737.822: 81.0576% ( 54) 00:07:39.115 10737.822 - 10788.234: 81.5579% ( 65) 00:07:39.115 10788.234 - 10838.646: 82.0351% ( 62) 00:07:39.115 10838.646 - 10889.058: 82.5431% ( 66) 00:07:39.115 10889.058 - 10939.471: 83.0819% ( 70) 00:07:39.115 10939.471 - 10989.883: 83.5822% ( 65) 00:07:39.115 10989.883 - 11040.295: 84.1133% ( 69) 00:07:39.115 11040.295 - 11090.708: 84.5289% ( 54) 00:07:39.115 11090.708 - 11141.120: 85.0216% ( 64) 00:07:39.115 11141.120 - 11191.532: 85.4911% ( 61) 00:07:39.115 11191.532 - 11241.945: 85.9606% ( 61) 00:07:39.115 11241.945 - 11292.357: 86.4301% ( 61) 00:07:39.115 11292.357 - 11342.769: 86.8458% ( 54) 00:07:39.115 11342.769 - 11393.182: 87.2537% ( 53) 00:07:39.115 11393.182 - 11443.594: 87.6462% ( 51) 00:07:39.115 11443.594 - 11494.006: 88.0773% ( 56) 00:07:39.115 11494.006 - 11544.418: 88.4467% ( 48) 00:07:39.115 11544.418 - 11594.831: 88.7623% ( 41) 00:07:39.115 11594.831 - 11645.243: 89.0625% ( 39) 00:07:39.115 11645.243 - 11695.655: 89.3088% ( 32) 00:07:39.115 11695.655 - 11746.068: 89.6167% ( 40) 00:07:39.115 11746.068 - 11796.480: 89.9015% ( 37) 00:07:39.115 11796.480 - 11846.892: 90.1401% ( 31) 00:07:39.116 11846.892 - 11897.305: 90.3787% ( 31) 00:07:39.116 11897.305 - 11947.717: 90.5942% ( 28) 00:07:39.116 11947.717 - 11998.129: 90.7789% ( 24) 00:07:39.116 11998.129 - 12048.542: 90.9714% ( 25) 00:07:39.116 12048.542 - 12098.954: 91.1946% ( 29) 00:07:39.116 12098.954 - 12149.366: 91.4024% ( 27) 00:07:39.116 12149.366 - 12199.778: 91.6102% ( 27) 00:07:39.116 12199.778 - 12250.191: 91.8334% ( 29) 00:07:39.116 12250.191 - 12300.603: 92.0720% ( 31) 00:07:39.116 12300.603 - 12351.015: 92.2645% ( 25) 00:07:39.116 12351.015 - 12401.428: 92.4723% ( 27) 00:07:39.116 12401.428 - 12451.840: 92.6647% ( 25) 00:07:39.116 12451.840 - 12502.252: 92.8417% ( 23) 00:07:39.116 12502.252 - 12552.665: 93.0188% ( 23) 00:07:39.116 12552.665 - 12603.077: 93.2189% ( 26) 00:07:39.116 12603.077 - 12653.489: 93.3959% ( 23) 00:07:39.116 12653.489 - 12703.902: 93.5191% ( 16) 00:07:39.116 12703.902 - 12754.314: 93.6884% ( 22) 00:07:39.116 12754.314 - 12804.726: 93.8424% ( 20) 00:07:39.116 12804.726 - 12855.138: 94.0117% ( 22) 00:07:39.116 12855.138 - 12905.551: 94.1810% ( 22) 00:07:39.116 12905.551 - 13006.375: 94.6198% ( 57) 00:07:39.116 13006.375 - 13107.200: 94.9200% ( 39) 00:07:39.116 13107.200 - 13208.025: 95.1893% ( 35) 00:07:39.116 13208.025 - 13308.849: 95.4126% ( 29) 00:07:39.116 13308.849 - 13409.674: 95.5973% ( 24) 00:07:39.116 13409.674 - 13510.498: 95.8205% ( 29) 00:07:39.116 13510.498 - 13611.323: 96.0206% ( 26) 00:07:39.116 13611.323 - 13712.148: 96.2669% ( 32) 00:07:39.116 13712.148 - 13812.972: 96.5132% ( 32) 00:07:39.116 13812.972 - 13913.797: 96.6749% ( 21) 00:07:39.116 13913.797 - 14014.622: 96.8288% ( 20) 00:07:39.116 14014.622 - 14115.446: 97.0751% ( 32) 00:07:39.116 14115.446 - 14216.271: 97.3368% ( 34) 00:07:39.116 14216.271 - 14317.095: 97.5677% ( 30) 00:07:39.116 14317.095 - 14417.920: 97.7756% ( 
27) 00:07:39.116 14417.920 - 14518.745: 97.9834% ( 27) 00:07:39.116 14518.745 - 14619.569: 98.1989% ( 28) 00:07:39.116 14619.569 - 14720.394: 98.3605% ( 21) 00:07:39.116 14720.394 - 14821.218: 98.5299% ( 22) 00:07:39.116 14821.218 - 14922.043: 98.6992% ( 22) 00:07:39.116 14922.043 - 15022.868: 98.8685% ( 22) 00:07:39.116 15022.868 - 15123.692: 98.9686% ( 13) 00:07:39.116 15123.692 - 15224.517: 99.0071% ( 5) 00:07:39.116 15224.517 - 15325.342: 99.0148% ( 1) 00:07:39.116 26819.348 - 27020.997: 99.0533% ( 5) 00:07:39.116 27020.997 - 27222.646: 99.1379% ( 11) 00:07:39.116 27222.646 - 27424.295: 99.2303% ( 12) 00:07:39.116 27424.295 - 27625.945: 99.3150% ( 11) 00:07:39.116 27625.945 - 27827.594: 99.4073% ( 12) 00:07:39.116 27827.594 - 28029.243: 99.4920% ( 11) 00:07:39.116 28029.243 - 28230.892: 99.5074% ( 2) 00:07:39.116 33070.474 - 33272.123: 99.5998% ( 12) 00:07:39.116 33272.123 - 33473.772: 99.6844% ( 11) 00:07:39.116 33473.772 - 33675.422: 99.7691% ( 11) 00:07:39.116 33675.422 - 33877.071: 99.8538% ( 11) 00:07:39.116 33877.071 - 34078.720: 99.9461% ( 12) 00:07:39.116 34078.720 - 34280.369: 100.0000% ( 7) 00:07:39.116 00:07:39.116 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.116 ============================================================================== 00:07:39.116 Range in us Cumulative IO count 00:07:39.116 4814.375 - 4839.582: 0.0231% ( 3) 00:07:39.116 4839.582 - 4864.788: 0.0847% ( 8) 00:07:39.116 4864.788 - 4889.994: 0.1155% ( 4) 00:07:39.116 4889.994 - 4915.200: 0.1385% ( 3) 00:07:39.116 4915.200 - 4940.406: 0.1539% ( 2) 00:07:39.116 4940.406 - 4965.612: 0.1693% ( 2) 00:07:39.116 4965.612 - 4990.818: 0.1847% ( 2) 00:07:39.116 4990.818 - 5016.025: 0.2001% ( 2) 00:07:39.116 5016.025 - 5041.231: 0.2078% ( 1) 00:07:39.116 5041.231 - 5066.437: 0.2155% ( 1) 00:07:39.116 5066.437 - 5091.643: 0.2386% ( 3) 00:07:39.116 5091.643 - 5116.849: 0.2617% ( 3) 00:07:39.116 5116.849 - 5142.055: 0.2848% ( 3) 00:07:39.116 5142.055 - 5167.262: 0.2925% ( 1) 00:07:39.116 5167.262 - 5192.468: 0.3079% ( 2) 00:07:39.116 5192.468 - 5217.674: 0.3233% ( 2) 00:07:39.116 5217.674 - 5242.880: 0.3387% ( 2) 00:07:39.116 5242.880 - 5268.086: 0.3541% ( 2) 00:07:39.116 5268.086 - 5293.292: 0.3695% ( 2) 00:07:39.116 5293.292 - 5318.498: 0.3925% ( 3) 00:07:39.116 5318.498 - 5343.705: 0.4079% ( 2) 00:07:39.116 5343.705 - 5368.911: 0.4233% ( 2) 00:07:39.116 5368.911 - 5394.117: 0.4387% ( 2) 00:07:39.116 5394.117 - 5419.323: 0.4541% ( 2) 00:07:39.116 5419.323 - 5444.529: 0.4695% ( 2) 00:07:39.116 5444.529 - 5469.735: 0.4849% ( 2) 00:07:39.116 5469.735 - 5494.942: 0.4926% ( 1) 00:07:39.116 7662.671 - 7713.083: 0.5542% ( 8) 00:07:39.116 7713.083 - 7763.495: 0.6081% ( 7) 00:07:39.116 7763.495 - 7813.908: 0.6312% ( 3) 00:07:39.116 7813.908 - 7864.320: 0.6466% ( 2) 00:07:39.116 7864.320 - 7914.732: 0.6850% ( 5) 00:07:39.116 7914.732 - 7965.145: 0.7466% ( 8) 00:07:39.116 7965.145 - 8015.557: 0.8544% ( 14) 00:07:39.116 8015.557 - 8065.969: 1.1469% ( 38) 00:07:39.116 8065.969 - 8116.382: 1.5240% ( 49) 00:07:39.116 8116.382 - 8166.794: 1.9858% ( 60) 00:07:39.116 8166.794 - 8217.206: 2.7325% ( 97) 00:07:39.116 8217.206 - 8267.618: 3.4714% ( 96) 00:07:39.116 8267.618 - 8318.031: 4.4181% ( 123) 00:07:39.116 8318.031 - 8368.443: 5.4957% ( 140) 00:07:39.116 8368.443 - 8418.855: 6.8273% ( 173) 00:07:39.116 8418.855 - 8469.268: 8.5899% ( 229) 00:07:39.116 8469.268 - 8519.680: 10.4603% ( 243) 00:07:39.116 8519.680 - 8570.092: 12.2999% ( 239) 00:07:39.116 8570.092 - 8620.505: 14.4320% ( 277) 00:07:39.116 8620.505 
- 8670.917: 16.8026% ( 308) 00:07:39.116 8670.917 - 8721.329: 19.4966% ( 350) 00:07:39.116 8721.329 - 8771.742: 22.3753% ( 374) 00:07:39.116 8771.742 - 8822.154: 25.2232% ( 370) 00:07:39.116 8822.154 - 8872.566: 28.3559% ( 407) 00:07:39.116 8872.566 - 8922.978: 31.6656% ( 430) 00:07:39.116 8922.978 - 8973.391: 34.8599% ( 415) 00:07:39.116 8973.391 - 9023.803: 38.1619% ( 429) 00:07:39.116 9023.803 - 9074.215: 41.4871% ( 432) 00:07:39.116 9074.215 - 9124.628: 44.8738% ( 440) 00:07:39.116 9124.628 - 9175.040: 48.1219% ( 422) 00:07:39.116 9175.040 - 9225.452: 51.2007% ( 400) 00:07:39.116 9225.452 - 9275.865: 54.1487% ( 383) 00:07:39.116 9275.865 - 9326.277: 56.9119% ( 359) 00:07:39.116 9326.277 - 9376.689: 59.4443% ( 329) 00:07:39.116 9376.689 - 9427.102: 61.8765% ( 316) 00:07:39.116 9427.102 - 9477.514: 64.0856% ( 287) 00:07:39.116 9477.514 - 9527.926: 66.0406% ( 254) 00:07:39.116 9527.926 - 9578.338: 67.8648% ( 237) 00:07:39.116 9578.338 - 9628.751: 69.5043% ( 213) 00:07:39.116 9628.751 - 9679.163: 70.9437% ( 187) 00:07:39.116 9679.163 - 9729.575: 72.2137% ( 165) 00:07:39.116 9729.575 - 9779.988: 73.3913% ( 153) 00:07:39.116 9779.988 - 9830.400: 74.3381% ( 123) 00:07:39.116 9830.400 - 9880.812: 75.1308% ( 103) 00:07:39.116 9880.812 - 9931.225: 75.7851% ( 85) 00:07:39.116 9931.225 - 9981.637: 76.3316% ( 71) 00:07:39.116 9981.637 - 10032.049: 76.8473% ( 67) 00:07:39.116 10032.049 - 10082.462: 77.3399% ( 64) 00:07:39.116 10082.462 - 10132.874: 77.8094% ( 61) 00:07:39.116 10132.874 - 10183.286: 78.2943% ( 63) 00:07:39.116 10183.286 - 10233.698: 78.7100% ( 54) 00:07:39.116 10233.698 - 10284.111: 79.0871% ( 49) 00:07:39.116 10284.111 - 10334.523: 79.4797% ( 51) 00:07:39.116 10334.523 - 10384.935: 79.8107% ( 43) 00:07:39.116 10384.935 - 10435.348: 80.1878% ( 49) 00:07:39.116 10435.348 - 10485.760: 80.4880% ( 39) 00:07:39.116 10485.760 - 10536.172: 80.8190% ( 43) 00:07:39.116 10536.172 - 10586.585: 81.1268% ( 40) 00:07:39.116 10586.585 - 10636.997: 81.5117% ( 50) 00:07:39.116 10636.997 - 10687.409: 81.8504% ( 44) 00:07:39.116 10687.409 - 10737.822: 82.2198% ( 48) 00:07:39.116 10737.822 - 10788.234: 82.6355% ( 54) 00:07:39.116 10788.234 - 10838.646: 82.9972% ( 47) 00:07:39.116 10838.646 - 10889.058: 83.3051% ( 40) 00:07:39.116 10889.058 - 10939.471: 83.6669% ( 47) 00:07:39.116 10939.471 - 10989.883: 83.9517% ( 37) 00:07:39.116 10989.883 - 11040.295: 84.2595% ( 40) 00:07:39.116 11040.295 - 11090.708: 84.4982% ( 31) 00:07:39.116 11090.708 - 11141.120: 84.7752% ( 36) 00:07:39.116 11141.120 - 11191.532: 85.0369% ( 34) 00:07:39.116 11191.532 - 11241.945: 85.3294% ( 38) 00:07:39.116 11241.945 - 11292.357: 85.6219% ( 38) 00:07:39.116 11292.357 - 11342.769: 85.9067% ( 37) 00:07:39.116 11342.769 - 11393.182: 86.2069% ( 39) 00:07:39.116 11393.182 - 11443.594: 86.5610% ( 46) 00:07:39.116 11443.594 - 11494.006: 86.8919% ( 43) 00:07:39.116 11494.006 - 11544.418: 87.2537% ( 47) 00:07:39.116 11544.418 - 11594.831: 87.5924% ( 44) 00:07:39.116 11594.831 - 11645.243: 87.9002% ( 40) 00:07:39.116 11645.243 - 11695.655: 88.2004% ( 39) 00:07:39.116 11695.655 - 11746.068: 88.5160% ( 41) 00:07:39.116 11746.068 - 11796.480: 88.8085% ( 38) 00:07:39.116 11796.480 - 11846.892: 89.0625% ( 33) 00:07:39.116 11846.892 - 11897.305: 89.3088% ( 32) 00:07:39.116 11897.305 - 11947.717: 89.6090% ( 39) 00:07:39.116 11947.717 - 11998.129: 89.8707% ( 34) 00:07:39.116 11998.129 - 12048.542: 90.1401% ( 35) 00:07:39.116 12048.542 - 12098.954: 90.4172% ( 36) 00:07:39.116 12098.954 - 12149.366: 90.6943% ( 36) 00:07:39.116 12149.366 - 12199.778: 
90.9868% ( 38) 00:07:39.116 12199.778 - 12250.191: 91.2869% ( 39) 00:07:39.116 12250.191 - 12300.603: 91.5948% ( 40) 00:07:39.116 12300.603 - 12351.015: 91.8719% ( 36) 00:07:39.116 12351.015 - 12401.428: 92.1336% ( 34) 00:07:39.116 12401.428 - 12451.840: 92.3722% ( 31) 00:07:39.116 12451.840 - 12502.252: 92.6108% ( 31) 00:07:39.116 12502.252 - 12552.665: 92.8648% ( 33) 00:07:39.117 12552.665 - 12603.077: 93.0650% ( 26) 00:07:39.117 12603.077 - 12653.489: 93.2959% ( 30) 00:07:39.117 12653.489 - 12703.902: 93.5037% ( 27) 00:07:39.117 12703.902 - 12754.314: 93.6961% ( 25) 00:07:39.117 12754.314 - 12804.726: 93.9039% ( 27) 00:07:39.117 12804.726 - 12855.138: 94.0887% ( 24) 00:07:39.117 12855.138 - 12905.551: 94.2888% ( 26) 00:07:39.117 12905.551 - 13006.375: 94.6736% ( 50) 00:07:39.117 13006.375 - 13107.200: 95.0277% ( 46) 00:07:39.117 13107.200 - 13208.025: 95.3125% ( 37) 00:07:39.117 13208.025 - 13308.849: 95.5434% ( 30) 00:07:39.117 13308.849 - 13409.674: 95.6897% ( 19) 00:07:39.117 13409.674 - 13510.498: 95.8590% ( 22) 00:07:39.117 13510.498 - 13611.323: 96.0206% ( 21) 00:07:39.117 13611.323 - 13712.148: 96.1438% ( 16) 00:07:39.117 13712.148 - 13812.972: 96.2977% ( 20) 00:07:39.117 13812.972 - 13913.797: 96.4825% ( 24) 00:07:39.117 13913.797 - 14014.622: 96.6749% ( 25) 00:07:39.117 14014.622 - 14115.446: 96.8519% ( 23) 00:07:39.117 14115.446 - 14216.271: 97.0366% ( 24) 00:07:39.117 14216.271 - 14317.095: 97.2137% ( 23) 00:07:39.117 14317.095 - 14417.920: 97.3984% ( 24) 00:07:39.117 14417.920 - 14518.745: 97.5754% ( 23) 00:07:39.117 14518.745 - 14619.569: 97.7448% ( 22) 00:07:39.117 14619.569 - 14720.394: 97.9218% ( 23) 00:07:39.117 14720.394 - 14821.218: 98.0526% ( 17) 00:07:39.117 14821.218 - 14922.043: 98.1450% ( 12) 00:07:39.117 14922.043 - 15022.868: 98.2682% ( 16) 00:07:39.117 15022.868 - 15123.692: 98.4144% ( 19) 00:07:39.117 15123.692 - 15224.517: 98.5760% ( 21) 00:07:39.117 15224.517 - 15325.342: 98.7069% ( 17) 00:07:39.117 15325.342 - 15426.166: 98.7839% ( 10) 00:07:39.117 15426.166 - 15526.991: 98.8147% ( 4) 00:07:39.117 15526.991 - 15627.815: 98.8608% ( 6) 00:07:39.117 15627.815 - 15728.640: 98.9070% ( 6) 00:07:39.117 15728.640 - 15829.465: 98.9532% ( 6) 00:07:39.117 15829.465 - 15930.289: 98.9994% ( 6) 00:07:39.117 15930.289 - 16031.114: 99.0148% ( 2) 00:07:39.117 27020.997 - 27222.646: 99.0764% ( 8) 00:07:39.117 27222.646 - 27424.295: 99.1533% ( 10) 00:07:39.117 27424.295 - 27625.945: 99.2457% ( 12) 00:07:39.117 27625.945 - 27827.594: 99.3304% ( 11) 00:07:39.117 27827.594 - 28029.243: 99.4227% ( 12) 00:07:39.117 28029.243 - 28230.892: 99.5074% ( 11) 00:07:39.117 33070.474 - 33272.123: 99.5767% ( 9) 00:07:39.117 33272.123 - 33473.772: 99.6613% ( 11) 00:07:39.117 33473.772 - 33675.422: 99.7460% ( 11) 00:07:39.117 33675.422 - 33877.071: 99.8307% ( 11) 00:07:39.117 33877.071 - 34078.720: 99.9076% ( 10) 00:07:39.117 34078.720 - 34280.369: 99.9923% ( 11) 00:07:39.117 34280.369 - 34482.018: 100.0000% ( 1) 00:07:39.117 00:07:39.117 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.117 ============================================================================== 00:07:39.117 Range in us Cumulative IO count 00:07:39.117 4385.871 - 4411.077: 0.0154% ( 2) 00:07:39.117 4411.077 - 4436.283: 0.0616% ( 6) 00:07:39.117 4436.283 - 4461.489: 0.0924% ( 4) 00:07:39.117 4461.489 - 4486.695: 0.1078% ( 2) 00:07:39.117 4511.902 - 4537.108: 0.1232% ( 2) 00:07:39.117 4537.108 - 4562.314: 0.1385% ( 2) 00:07:39.117 4562.314 - 4587.520: 0.1539% ( 2) 00:07:39.117 4587.520 - 4612.726: 
0.1693% ( 2) 00:07:39.117 4612.726 - 4637.932: 0.1847% ( 2) 00:07:39.117 4637.932 - 4663.138: 0.2001% ( 2) 00:07:39.117 4663.138 - 4688.345: 0.2232% ( 3) 00:07:39.117 4688.345 - 4713.551: 0.2386% ( 2) 00:07:39.117 4713.551 - 4738.757: 0.2540% ( 2) 00:07:39.117 4738.757 - 4763.963: 0.2694% ( 2) 00:07:39.117 4763.963 - 4789.169: 0.2848% ( 2) 00:07:39.117 4789.169 - 4814.375: 0.3079% ( 3) 00:07:39.117 4814.375 - 4839.582: 0.3233% ( 2) 00:07:39.117 4839.582 - 4864.788: 0.3387% ( 2) 00:07:39.117 4864.788 - 4889.994: 0.3541% ( 2) 00:07:39.117 4889.994 - 4915.200: 0.3695% ( 2) 00:07:39.117 4915.200 - 4940.406: 0.3925% ( 3) 00:07:39.117 4940.406 - 4965.612: 0.4079% ( 2) 00:07:39.117 4965.612 - 4990.818: 0.4233% ( 2) 00:07:39.117 4990.818 - 5016.025: 0.4387% ( 2) 00:07:39.117 5016.025 - 5041.231: 0.4618% ( 3) 00:07:39.117 5041.231 - 5066.437: 0.4772% ( 2) 00:07:39.117 5066.437 - 5091.643: 0.4926% ( 2) 00:07:39.117 7208.960 - 7259.372: 0.5003% ( 1) 00:07:39.117 7259.372 - 7309.785: 0.5696% ( 9) 00:07:39.117 7309.785 - 7360.197: 0.6081% ( 5) 00:07:39.117 7360.197 - 7410.609: 0.6389% ( 4) 00:07:39.117 7410.609 - 7461.022: 0.6619% ( 3) 00:07:39.117 7461.022 - 7511.434: 0.7004% ( 5) 00:07:39.117 7511.434 - 7561.846: 0.7312% ( 4) 00:07:39.117 7561.846 - 7612.258: 0.7697% ( 5) 00:07:39.117 7612.258 - 7662.671: 0.8005% ( 4) 00:07:39.117 7662.671 - 7713.083: 0.8313% ( 4) 00:07:39.117 7713.083 - 7763.495: 0.8621% ( 4) 00:07:39.117 7763.495 - 7813.908: 0.8929% ( 4) 00:07:39.117 7813.908 - 7864.320: 0.9236% ( 4) 00:07:39.117 7864.320 - 7914.732: 0.9621% ( 5) 00:07:39.117 7914.732 - 7965.145: 1.0083% ( 6) 00:07:39.117 7965.145 - 8015.557: 1.1161% ( 14) 00:07:39.117 8015.557 - 8065.969: 1.3624% ( 32) 00:07:39.117 8065.969 - 8116.382: 1.6703% ( 40) 00:07:39.117 8116.382 - 8166.794: 1.9858% ( 41) 00:07:39.117 8166.794 - 8217.206: 2.5400% ( 72) 00:07:39.117 8217.206 - 8267.618: 3.2789% ( 96) 00:07:39.117 8267.618 - 8318.031: 4.1487% ( 113) 00:07:39.117 8318.031 - 8368.443: 5.1339% ( 128) 00:07:39.117 8368.443 - 8418.855: 6.2962% ( 151) 00:07:39.117 8418.855 - 8469.268: 7.8125% ( 197) 00:07:39.117 8469.268 - 8519.680: 9.4982% ( 219) 00:07:39.117 8519.680 - 8570.092: 11.6379% ( 278) 00:07:39.117 8570.092 - 8620.505: 14.0625% ( 315) 00:07:39.117 8620.505 - 8670.917: 16.4178% ( 306) 00:07:39.117 8670.917 - 8721.329: 19.0964% ( 348) 00:07:39.117 8721.329 - 8771.742: 21.9135% ( 366) 00:07:39.117 8771.742 - 8822.154: 24.8538% ( 382) 00:07:39.117 8822.154 - 8872.566: 27.9942% ( 408) 00:07:39.117 8872.566 - 8922.978: 31.5117% ( 457) 00:07:39.117 8922.978 - 8973.391: 34.9908% ( 452) 00:07:39.117 8973.391 - 9023.803: 38.3775% ( 440) 00:07:39.117 9023.803 - 9074.215: 41.8411% ( 450) 00:07:39.117 9074.215 - 9124.628: 45.1278% ( 427) 00:07:39.117 9124.628 - 9175.040: 48.1450% ( 392) 00:07:39.117 9175.040 - 9225.452: 51.2623% ( 405) 00:07:39.117 9225.452 - 9275.865: 54.2796% ( 392) 00:07:39.117 9275.865 - 9326.277: 57.0274% ( 357) 00:07:39.117 9326.277 - 9376.689: 59.6059% ( 335) 00:07:39.117 9376.689 - 9427.102: 62.0613% ( 319) 00:07:39.117 9427.102 - 9477.514: 64.3396% ( 296) 00:07:39.117 9477.514 - 9527.926: 66.3331% ( 259) 00:07:39.117 9527.926 - 9578.338: 68.2266% ( 246) 00:07:39.117 9578.338 - 9628.751: 69.9276% ( 221) 00:07:39.117 9628.751 - 9679.163: 71.4132% ( 193) 00:07:39.117 9679.163 - 9729.575: 72.7679% ( 176) 00:07:39.117 9729.575 - 9779.988: 73.8147% ( 136) 00:07:39.117 9779.988 - 9830.400: 74.6151% ( 104) 00:07:39.117 9830.400 - 9880.812: 75.3464% ( 95) 00:07:39.117 9880.812 - 9931.225: 75.8621% ( 67) 00:07:39.117 
9931.225 - 9981.637: 76.4547% ( 77) 00:07:39.117 9981.637 - 10032.049: 77.0243% ( 74) 00:07:39.117 10032.049 - 10082.462: 77.5246% ( 65) 00:07:39.117 10082.462 - 10132.874: 78.0172% ( 64) 00:07:39.117 10132.874 - 10183.286: 78.5560% ( 70) 00:07:39.117 10183.286 - 10233.698: 79.1641% ( 79) 00:07:39.117 10233.698 - 10284.111: 79.6490% ( 63) 00:07:39.117 10284.111 - 10334.523: 80.0570% ( 53) 00:07:39.117 10334.523 - 10384.935: 80.4649% ( 53) 00:07:39.117 10384.935 - 10435.348: 80.8036% ( 44) 00:07:39.117 10435.348 - 10485.760: 81.1422% ( 44) 00:07:39.117 10485.760 - 10536.172: 81.4193% ( 36) 00:07:39.117 10536.172 - 10586.585: 81.7041% ( 37) 00:07:39.117 10586.585 - 10636.997: 81.9889% ( 37) 00:07:39.117 10636.997 - 10687.409: 82.2429% ( 33) 00:07:39.117 10687.409 - 10737.822: 82.4815% ( 31) 00:07:39.117 10737.822 - 10788.234: 82.7432% ( 34) 00:07:39.117 10788.234 - 10838.646: 83.0819% ( 44) 00:07:39.117 10838.646 - 10889.058: 83.3590% ( 36) 00:07:39.117 10889.058 - 10939.471: 83.5822% ( 29) 00:07:39.117 10939.471 - 10989.883: 83.8439% ( 34) 00:07:39.117 10989.883 - 11040.295: 84.0825% ( 31) 00:07:39.117 11040.295 - 11090.708: 84.2903% ( 27) 00:07:39.117 11090.708 - 11141.120: 84.5135% ( 29) 00:07:39.117 11141.120 - 11191.532: 84.7137% ( 26) 00:07:39.117 11191.532 - 11241.945: 84.9215% ( 27) 00:07:39.117 11241.945 - 11292.357: 85.2294% ( 40) 00:07:39.117 11292.357 - 11342.769: 85.5142% ( 37) 00:07:39.117 11342.769 - 11393.182: 85.8759% ( 47) 00:07:39.117 11393.182 - 11443.594: 86.1530% ( 36) 00:07:39.117 11443.594 - 11494.006: 86.3993% ( 32) 00:07:39.117 11494.006 - 11544.418: 86.6764% ( 36) 00:07:39.117 11544.418 - 11594.831: 86.9458% ( 35) 00:07:39.117 11594.831 - 11645.243: 87.2229% ( 36) 00:07:39.117 11645.243 - 11695.655: 87.5308% ( 40) 00:07:39.117 11695.655 - 11746.068: 87.8541% ( 42) 00:07:39.117 11746.068 - 11796.480: 88.1235% ( 35) 00:07:39.117 11796.480 - 11846.892: 88.3775% ( 33) 00:07:39.117 11846.892 - 11897.305: 88.6546% ( 36) 00:07:39.117 11897.305 - 11947.717: 89.0086% ( 46) 00:07:39.117 11947.717 - 11998.129: 89.3319% ( 42) 00:07:39.117 11998.129 - 12048.542: 89.6860% ( 46) 00:07:39.117 12048.542 - 12098.954: 90.0862% ( 52) 00:07:39.117 12098.954 - 12149.366: 90.4095% ( 42) 00:07:39.117 12149.366 - 12199.778: 90.7328% ( 42) 00:07:39.117 12199.778 - 12250.191: 91.0637% ( 43) 00:07:39.117 12250.191 - 12300.603: 91.3254% ( 34) 00:07:39.117 12300.603 - 12351.015: 91.6487% ( 42) 00:07:39.117 12351.015 - 12401.428: 91.9797% ( 43) 00:07:39.117 12401.428 - 12451.840: 92.2876% ( 40) 00:07:39.117 12451.840 - 12502.252: 92.6570% ( 48) 00:07:39.117 12502.252 - 12552.665: 92.9957% ( 44) 00:07:39.118 12552.665 - 12603.077: 93.2959% ( 39) 00:07:39.118 12603.077 - 12653.489: 93.6038% ( 40) 00:07:39.118 12653.489 - 12703.902: 93.8962% ( 38) 00:07:39.118 12703.902 - 12754.314: 94.1502% ( 33) 00:07:39.118 12754.314 - 12804.726: 94.3812% ( 30) 00:07:39.118 12804.726 - 12855.138: 94.5813% ( 26) 00:07:39.118 12855.138 - 12905.551: 94.7968% ( 28) 00:07:39.118 12905.551 - 13006.375: 95.1432% ( 45) 00:07:39.118 13006.375 - 13107.200: 95.3587% ( 28) 00:07:39.118 13107.200 - 13208.025: 95.5819% ( 29) 00:07:39.118 13208.025 - 13308.849: 95.8128% ( 30) 00:07:39.118 13308.849 - 13409.674: 95.9514% ( 18) 00:07:39.118 13409.674 - 13510.498: 96.1053% ( 20) 00:07:39.118 13510.498 - 13611.323: 96.2438% ( 18) 00:07:39.118 13611.323 - 13712.148: 96.3747% ( 17) 00:07:39.118 13712.148 - 13812.972: 96.4748% ( 13) 00:07:39.118 13812.972 - 13913.797: 96.5671% ( 12) 00:07:39.118 13913.797 - 14014.622: 96.7595% ( 25) 
00:07:39.118 14014.622 - 14115.446: 96.8750% ( 15) 00:07:39.118 14115.446 - 14216.271: 97.0212% ( 19) 00:07:39.118 14216.271 - 14317.095: 97.1521% ( 17) 00:07:39.118 14317.095 - 14417.920: 97.2906% ( 18) 00:07:39.118 14417.920 - 14518.745: 97.4446% ( 20) 00:07:39.118 14518.745 - 14619.569: 97.6293% ( 24) 00:07:39.118 14619.569 - 14720.394: 97.8140% ( 24) 00:07:39.118 14720.394 - 14821.218: 98.0373% ( 29) 00:07:39.118 14821.218 - 14922.043: 98.1989% ( 21) 00:07:39.118 14922.043 - 15022.868: 98.3374% ( 18) 00:07:39.118 15022.868 - 15123.692: 98.4760% ( 18) 00:07:39.118 15123.692 - 15224.517: 98.6145% ( 18) 00:07:39.118 15224.517 - 15325.342: 98.6915% ( 10) 00:07:39.118 15325.342 - 15426.166: 98.7916% ( 13) 00:07:39.118 15426.166 - 15526.991: 98.8762% ( 11) 00:07:39.118 15526.991 - 15627.815: 98.9609% ( 11) 00:07:39.118 15627.815 - 15728.640: 99.0071% ( 6) 00:07:39.118 15728.640 - 15829.465: 99.0148% ( 1) 00:07:39.118 26416.049 - 26617.698: 99.0533% ( 5) 00:07:39.118 26617.698 - 26819.348: 99.1379% ( 11) 00:07:39.118 26819.348 - 27020.997: 99.2226% ( 11) 00:07:39.118 27020.997 - 27222.646: 99.3150% ( 12) 00:07:39.118 27222.646 - 27424.295: 99.3996% ( 11) 00:07:39.118 27424.295 - 27625.945: 99.4843% ( 11) 00:07:39.118 27625.945 - 27827.594: 99.5074% ( 3) 00:07:39.118 32465.526 - 32667.175: 99.5459% ( 5) 00:07:39.118 32667.175 - 32868.825: 99.6305% ( 11) 00:07:39.118 32868.825 - 33070.474: 99.7229% ( 12) 00:07:39.118 33070.474 - 33272.123: 99.8076% ( 11) 00:07:39.118 33272.123 - 33473.772: 99.8999% ( 12) 00:07:39.118 33473.772 - 33675.422: 99.9769% ( 10) 00:07:39.118 33675.422 - 33877.071: 100.0000% ( 3) 00:07:39.118 00:07:39.118 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.118 ============================================================================== 00:07:39.118 Range in us Cumulative IO count 00:07:39.118 3957.366 - 3982.572: 0.0154% ( 2) 00:07:39.118 3982.572 - 4007.778: 0.0308% ( 2) 00:07:39.118 4007.778 - 4032.985: 0.0539% ( 3) 00:07:39.118 4032.985 - 4058.191: 0.0693% ( 2) 00:07:39.118 4058.191 - 4083.397: 0.0847% ( 2) 00:07:39.118 4083.397 - 4108.603: 0.1001% ( 2) 00:07:39.118 4108.603 - 4133.809: 0.1155% ( 2) 00:07:39.118 4133.809 - 4159.015: 0.1308% ( 2) 00:07:39.118 4159.015 - 4184.222: 0.1539% ( 3) 00:07:39.118 4184.222 - 4209.428: 0.1693% ( 2) 00:07:39.118 4209.428 - 4234.634: 0.1847% ( 2) 00:07:39.118 4234.634 - 4259.840: 0.2001% ( 2) 00:07:39.118 4259.840 - 4285.046: 0.2155% ( 2) 00:07:39.118 4285.046 - 4310.252: 0.2309% ( 2) 00:07:39.118 4310.252 - 4335.458: 0.2540% ( 3) 00:07:39.118 4335.458 - 4360.665: 0.2694% ( 2) 00:07:39.118 4360.665 - 4385.871: 0.2848% ( 2) 00:07:39.118 4385.871 - 4411.077: 0.3002% ( 2) 00:07:39.118 4411.077 - 4436.283: 0.3156% ( 2) 00:07:39.118 4436.283 - 4461.489: 0.3387% ( 3) 00:07:39.118 4461.489 - 4486.695: 0.3541% ( 2) 00:07:39.118 4486.695 - 4511.902: 0.3695% ( 2) 00:07:39.118 4511.902 - 4537.108: 0.3849% ( 2) 00:07:39.118 4537.108 - 4562.314: 0.4002% ( 2) 00:07:39.118 4562.314 - 4587.520: 0.4156% ( 2) 00:07:39.118 4587.520 - 4612.726: 0.4387% ( 3) 00:07:39.118 4612.726 - 4637.932: 0.4541% ( 2) 00:07:39.118 4637.932 - 4663.138: 0.4695% ( 2) 00:07:39.118 4663.138 - 4688.345: 0.4849% ( 2) 00:07:39.118 4688.345 - 4713.551: 0.4926% ( 1) 00:07:39.118 6755.249 - 6805.662: 0.5234% ( 4) 00:07:39.118 6805.662 - 6856.074: 0.5542% ( 4) 00:07:39.118 6856.074 - 6906.486: 0.5927% ( 5) 00:07:39.118 6906.486 - 6956.898: 0.6235% ( 4) 00:07:39.118 6956.898 - 7007.311: 0.6542% ( 4) 00:07:39.118 7007.311 - 7057.723: 0.6927% ( 5) 
00:07:39.118 7057.723 - 7108.135: 0.7235% ( 4) 00:07:39.118 7108.135 - 7158.548: 0.7543% ( 4) 00:07:39.118 7158.548 - 7208.960: 0.7928% ( 5) 00:07:39.118 7208.960 - 7259.372: 0.8236% ( 4) 00:07:39.118 7259.372 - 7309.785: 0.8544% ( 4) 00:07:39.118 7309.785 - 7360.197: 0.8852% ( 4) 00:07:39.118 7360.197 - 7410.609: 0.9236% ( 5) 00:07:39.118 7410.609 - 7461.022: 0.9621% ( 5) 00:07:39.118 7461.022 - 7511.434: 0.9852% ( 3) 00:07:39.118 7813.908 - 7864.320: 0.9929% ( 1) 00:07:39.118 7864.320 - 7914.732: 1.0314% ( 5) 00:07:39.118 7914.732 - 7965.145: 1.0930% ( 8) 00:07:39.118 7965.145 - 8015.557: 1.1469% ( 7) 00:07:39.118 8015.557 - 8065.969: 1.3008% ( 20) 00:07:39.118 8065.969 - 8116.382: 1.7087% ( 53) 00:07:39.118 8116.382 - 8166.794: 2.0782% ( 48) 00:07:39.118 8166.794 - 8217.206: 2.5015% ( 55) 00:07:39.118 8217.206 - 8267.618: 3.1943% ( 90) 00:07:39.118 8267.618 - 8318.031: 4.0333% ( 109) 00:07:39.118 8318.031 - 8368.443: 4.8799% ( 110) 00:07:39.118 8368.443 - 8418.855: 5.9883% ( 144) 00:07:39.118 8418.855 - 8469.268: 7.4430% ( 189) 00:07:39.118 8469.268 - 8519.680: 9.2134% ( 230) 00:07:39.118 8519.680 - 8570.092: 11.2223% ( 261) 00:07:39.118 8570.092 - 8620.505: 13.4467% ( 289) 00:07:39.118 8620.505 - 8670.917: 15.8482% ( 312) 00:07:39.118 8670.917 - 8721.329: 18.5345% ( 349) 00:07:39.118 8721.329 - 8771.742: 21.4901% ( 384) 00:07:39.118 8771.742 - 8822.154: 24.7845% ( 428) 00:07:39.118 8822.154 - 8872.566: 28.0634% ( 426) 00:07:39.118 8872.566 - 8922.978: 31.5502% ( 453) 00:07:39.118 8922.978 - 8973.391: 34.9523% ( 442) 00:07:39.118 8973.391 - 9023.803: 38.3005% ( 435) 00:07:39.118 9023.803 - 9074.215: 41.7873% ( 453) 00:07:39.118 9074.215 - 9124.628: 44.9123% ( 406) 00:07:39.118 9124.628 - 9175.040: 48.0911% ( 413) 00:07:39.118 9175.040 - 9225.452: 51.2161% ( 406) 00:07:39.118 9225.452 - 9275.865: 54.1718% ( 384) 00:07:39.118 9275.865 - 9326.277: 56.9812% ( 365) 00:07:39.118 9326.277 - 9376.689: 59.6983% ( 353) 00:07:39.118 9376.689 - 9427.102: 62.2383% ( 330) 00:07:39.118 9427.102 - 9477.514: 64.5705% ( 303) 00:07:39.118 9477.514 - 9527.926: 66.6333% ( 268) 00:07:39.118 9527.926 - 9578.338: 68.4421% ( 235) 00:07:39.118 9578.338 - 9628.751: 69.9507% ( 196) 00:07:39.118 9628.751 - 9679.163: 71.2515% ( 169) 00:07:39.118 9679.163 - 9729.575: 72.4369% ( 154) 00:07:39.118 9729.575 - 9779.988: 73.5222% ( 141) 00:07:39.118 9779.988 - 9830.400: 74.4150% ( 116) 00:07:39.118 9830.400 - 9880.812: 75.1385% ( 94) 00:07:39.118 9880.812 - 9931.225: 75.7928% ( 85) 00:07:39.118 9931.225 - 9981.637: 76.3547% ( 73) 00:07:39.118 9981.637 - 10032.049: 76.9320% ( 75) 00:07:39.118 10032.049 - 10082.462: 77.4708% ( 70) 00:07:39.118 10082.462 - 10132.874: 77.9711% ( 65) 00:07:39.118 10132.874 - 10183.286: 78.4175% ( 58) 00:07:39.118 10183.286 - 10233.698: 78.8793% ( 60) 00:07:39.118 10233.698 - 10284.111: 79.2796% ( 52) 00:07:39.118 10284.111 - 10334.523: 79.7491% ( 61) 00:07:39.118 10334.523 - 10384.935: 80.1878% ( 57) 00:07:39.118 10384.935 - 10435.348: 80.4957% ( 40) 00:07:39.118 10435.348 - 10485.760: 80.7959% ( 39) 00:07:39.118 10485.760 - 10536.172: 81.1268% ( 43) 00:07:39.118 10536.172 - 10586.585: 81.3962% ( 35) 00:07:39.118 10586.585 - 10636.997: 81.6502% ( 33) 00:07:39.118 10636.997 - 10687.409: 82.0043% ( 46) 00:07:39.118 10687.409 - 10737.822: 82.3507% ( 45) 00:07:39.119 10737.822 - 10788.234: 82.6047% ( 33) 00:07:39.119 10788.234 - 10838.646: 82.8972% ( 38) 00:07:39.119 10838.646 - 10889.058: 83.1435% ( 32) 00:07:39.119 10889.058 - 10939.471: 83.4052% ( 34) 00:07:39.119 10939.471 - 10989.883: 
83.6900% ( 37) 00:07:39.119 10989.883 - 11040.295: 83.9517% ( 34) 00:07:39.119 11040.295 - 11090.708: 84.2057% ( 33) 00:07:39.119 11090.708 - 11141.120: 84.4597% ( 33) 00:07:39.119 11141.120 - 11191.532: 84.6829% ( 29) 00:07:39.119 11191.532 - 11241.945: 84.9061% ( 29) 00:07:39.119 11241.945 - 11292.357: 85.0754% ( 22) 00:07:39.119 11292.357 - 11342.769: 85.2448% ( 22) 00:07:39.119 11342.769 - 11393.182: 85.4141% ( 22) 00:07:39.119 11393.182 - 11443.594: 85.6373% ( 29) 00:07:39.119 11443.594 - 11494.006: 85.9144% ( 36) 00:07:39.119 11494.006 - 11544.418: 86.2223% ( 40) 00:07:39.119 11544.418 - 11594.831: 86.5456% ( 42) 00:07:39.119 11594.831 - 11645.243: 86.8842% ( 44) 00:07:39.119 11645.243 - 11695.655: 87.2229% ( 44) 00:07:39.119 11695.655 - 11746.068: 87.5847% ( 47) 00:07:39.119 11746.068 - 11796.480: 87.9695% ( 50) 00:07:39.119 11796.480 - 11846.892: 88.3159% ( 45) 00:07:39.119 11846.892 - 11897.305: 88.6623% ( 45) 00:07:39.119 11897.305 - 11947.717: 89.0317% ( 48) 00:07:39.119 11947.717 - 11998.129: 89.3935% ( 47) 00:07:39.119 11998.129 - 12048.542: 89.7398% ( 45) 00:07:39.119 12048.542 - 12098.954: 90.0554% ( 41) 00:07:39.119 12098.954 - 12149.366: 90.3787% ( 42) 00:07:39.119 12149.366 - 12199.778: 90.7328% ( 46) 00:07:39.119 12199.778 - 12250.191: 91.1176% ( 50) 00:07:39.119 12250.191 - 12300.603: 91.4794% ( 47) 00:07:39.119 12300.603 - 12351.015: 91.8873% ( 53) 00:07:39.119 12351.015 - 12401.428: 92.2337% ( 45) 00:07:39.119 12401.428 - 12451.840: 92.5647% ( 43) 00:07:39.119 12451.840 - 12502.252: 92.8494% ( 37) 00:07:39.119 12502.252 - 12552.665: 93.1034% ( 33) 00:07:39.119 12552.665 - 12603.077: 93.3344% ( 30) 00:07:39.119 12603.077 - 12653.489: 93.5345% ( 26) 00:07:39.119 12653.489 - 12703.902: 93.7577% ( 29) 00:07:39.119 12703.902 - 12754.314: 93.9886% ( 30) 00:07:39.119 12754.314 - 12804.726: 94.2118% ( 29) 00:07:39.119 12804.726 - 12855.138: 94.3966% ( 24) 00:07:39.119 12855.138 - 12905.551: 94.5505% ( 20) 00:07:39.119 12905.551 - 13006.375: 94.8969% ( 45) 00:07:39.119 13006.375 - 13107.200: 95.2047% ( 40) 00:07:39.119 13107.200 - 13208.025: 95.5511% ( 45) 00:07:39.119 13208.025 - 13308.849: 95.8282% ( 36) 00:07:39.119 13308.849 - 13409.674: 96.1592% ( 43) 00:07:39.119 13409.674 - 13510.498: 96.4209% ( 34) 00:07:39.119 13510.498 - 13611.323: 96.6364% ( 28) 00:07:39.119 13611.323 - 13712.148: 96.8288% ( 25) 00:07:39.119 13712.148 - 13812.972: 97.0597% ( 30) 00:07:39.119 13812.972 - 13913.797: 97.2829% ( 29) 00:07:39.119 13913.797 - 14014.622: 97.4754% ( 25) 00:07:39.119 14014.622 - 14115.446: 97.6678% ( 25) 00:07:39.119 14115.446 - 14216.271: 97.7679% ( 13) 00:07:39.119 14216.271 - 14317.095: 97.8217% ( 7) 00:07:39.119 14317.095 - 14417.920: 97.8679% ( 6) 00:07:39.119 14417.920 - 14518.745: 97.9295% ( 8) 00:07:39.119 14518.745 - 14619.569: 98.0219% ( 12) 00:07:39.119 14619.569 - 14720.394: 98.1065% ( 11) 00:07:39.119 14720.394 - 14821.218: 98.1835% ( 10) 00:07:39.119 14821.218 - 14922.043: 98.2682% ( 11) 00:07:39.119 14922.043 - 15022.868: 98.3605% ( 12) 00:07:39.119 15022.868 - 15123.692: 98.4452% ( 11) 00:07:39.119 15123.692 - 15224.517: 98.5376% ( 12) 00:07:39.119 15224.517 - 15325.342: 98.6222% ( 11) 00:07:39.119 15325.342 - 15426.166: 98.7146% ( 12) 00:07:39.119 15426.166 - 15526.991: 98.7916% ( 10) 00:07:39.119 15526.991 - 15627.815: 98.8762% ( 11) 00:07:39.119 15627.815 - 15728.640: 98.9532% ( 10) 00:07:39.119 15728.640 - 15829.465: 98.9994% ( 6) 00:07:39.119 15829.465 - 15930.289: 99.0148% ( 2) 00:07:39.119 25811.102 - 26012.751: 99.0687% ( 7) 00:07:39.119 26012.751 - 
26214.400: 99.1533% ( 11) 00:07:39.119 26214.400 - 26416.049: 99.2457% ( 12) 00:07:39.119 26416.049 - 26617.698: 99.3304% ( 11) 00:07:39.119 26617.698 - 26819.348: 99.4227% ( 12) 00:07:39.119 26819.348 - 27020.997: 99.4997% ( 10) 00:07:39.119 27020.997 - 27222.646: 99.5074% ( 1) 00:07:39.119 31860.578 - 32062.228: 99.5305% ( 3) 00:07:39.119 32062.228 - 32263.877: 99.6151% ( 11) 00:07:39.119 32263.877 - 32465.526: 99.7075% ( 12) 00:07:39.119 32465.526 - 32667.175: 99.7922% ( 11) 00:07:39.119 32667.175 - 32868.825: 99.8768% ( 11) 00:07:39.119 32868.825 - 33070.474: 99.9692% ( 12) 00:07:39.119 33070.474 - 33272.123: 100.0000% ( 4) 00:07:39.119 00:07:39.119 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.119 ============================================================================== 00:07:39.119 Range in us Cumulative IO count 00:07:39.119 3654.892 - 3680.098: 0.0077% ( 1) 00:07:39.119 3680.098 - 3705.305: 0.0613% ( 7) 00:07:39.119 3705.305 - 3730.511: 0.0996% ( 5) 00:07:39.119 3730.511 - 3755.717: 0.1072% ( 1) 00:07:39.119 3755.717 - 3780.923: 0.1379% ( 4) 00:07:39.119 3780.923 - 3806.129: 0.1532% ( 2) 00:07:39.119 3806.129 - 3831.335: 0.1685% ( 2) 00:07:39.119 3831.335 - 3856.542: 0.1762% ( 1) 00:07:39.119 3881.748 - 3906.954: 0.1915% ( 2) 00:07:39.119 3906.954 - 3932.160: 0.2068% ( 2) 00:07:39.119 3932.160 - 3957.366: 0.2221% ( 2) 00:07:39.119 3957.366 - 3982.572: 0.2374% ( 2) 00:07:39.119 3982.572 - 4007.778: 0.2451% ( 1) 00:07:39.119 4007.778 - 4032.985: 0.2604% ( 2) 00:07:39.119 4032.985 - 4058.191: 0.2757% ( 2) 00:07:39.119 4058.191 - 4083.397: 0.2911% ( 2) 00:07:39.119 4083.397 - 4108.603: 0.3064% ( 2) 00:07:39.119 4108.603 - 4133.809: 0.3217% ( 2) 00:07:39.119 4133.809 - 4159.015: 0.3370% ( 2) 00:07:39.119 4159.015 - 4184.222: 0.3523% ( 2) 00:07:39.119 4184.222 - 4209.428: 0.3676% ( 2) 00:07:39.119 4209.428 - 4234.634: 0.3830% ( 2) 00:07:39.119 4234.634 - 4259.840: 0.3906% ( 1) 00:07:39.119 4259.840 - 4285.046: 0.4059% ( 2) 00:07:39.119 4285.046 - 4310.252: 0.4213% ( 2) 00:07:39.119 4310.252 - 4335.458: 0.4442% ( 3) 00:07:39.119 4335.458 - 4360.665: 0.4596% ( 2) 00:07:39.119 4360.665 - 4385.871: 0.4749% ( 2) 00:07:39.119 4385.871 - 4411.077: 0.4902% ( 2) 00:07:39.119 6503.188 - 6553.600: 0.5744% ( 11) 00:07:39.119 6553.600 - 6604.012: 0.6127% ( 5) 00:07:39.119 6604.012 - 6654.425: 0.6281% ( 2) 00:07:39.119 6654.425 - 6704.837: 0.6587% ( 4) 00:07:39.119 6704.837 - 6755.249: 0.6970% ( 5) 00:07:39.119 6755.249 - 6805.662: 0.7353% ( 5) 00:07:39.119 6805.662 - 6856.074: 0.7659% ( 4) 00:07:39.119 6856.074 - 6906.486: 0.7966% ( 4) 00:07:39.119 6906.486 - 6956.898: 0.8272% ( 4) 00:07:39.119 6956.898 - 7007.311: 0.8578% ( 4) 00:07:39.119 7007.311 - 7057.723: 0.8961% ( 5) 00:07:39.119 7057.723 - 7108.135: 0.9268% ( 4) 00:07:39.119 7108.135 - 7158.548: 0.9651% ( 5) 00:07:39.119 7158.548 - 7208.960: 0.9804% ( 2) 00:07:39.119 7864.320 - 7914.732: 1.0034% ( 3) 00:07:39.119 7914.732 - 7965.145: 1.0646% ( 8) 00:07:39.119 7965.145 - 8015.557: 1.1489% ( 11) 00:07:39.119 8015.557 - 8065.969: 1.3710% ( 29) 00:07:39.119 8065.969 - 8116.382: 1.5472% ( 23) 00:07:39.119 8116.382 - 8166.794: 1.9072% ( 47) 00:07:39.119 8166.794 - 8217.206: 2.4050% ( 65) 00:07:39.119 8217.206 - 8267.618: 2.9565% ( 72) 00:07:39.119 8267.618 - 8318.031: 3.7760% ( 107) 00:07:39.119 8318.031 - 8368.443: 4.7258% ( 124) 00:07:39.119 8368.443 - 8418.855: 5.8670% ( 149) 00:07:39.119 8418.855 - 8469.268: 7.2074% ( 175) 00:07:39.119 8469.268 - 8519.680: 9.1146% ( 249) 00:07:39.119 8519.680 - 8570.092: 11.2822% ( 
283) 00:07:39.119 8570.092 - 8620.505: 13.6183% ( 305) 00:07:39.119 8620.505 - 8670.917: 16.1152% ( 326) 00:07:39.119 8670.917 - 8721.329: 18.6581% ( 332) 00:07:39.119 8721.329 - 8771.742: 21.2163% ( 334) 00:07:39.119 8771.742 - 8822.154: 24.0656% ( 372) 00:07:39.119 8822.154 - 8872.566: 27.0833% ( 394) 00:07:39.119 8872.566 - 8922.978: 30.3615% ( 428) 00:07:39.119 8922.978 - 8973.391: 33.8388% ( 454) 00:07:39.119 8973.391 - 9023.803: 37.5153% ( 480) 00:07:39.119 9023.803 - 9074.215: 40.9850% ( 453) 00:07:39.119 9074.215 - 9124.628: 44.4776% ( 456) 00:07:39.119 9124.628 - 9175.040: 47.6945% ( 420) 00:07:39.119 9175.040 - 9225.452: 51.0110% ( 433) 00:07:39.119 9225.452 - 9275.865: 54.0211% ( 393) 00:07:39.119 9275.865 - 9326.277: 56.9087% ( 377) 00:07:39.119 9326.277 - 9376.689: 59.6431% ( 357) 00:07:39.119 9376.689 - 9427.102: 62.2243% ( 337) 00:07:39.119 9427.102 - 9477.514: 64.3995% ( 284) 00:07:39.119 9477.514 - 9527.926: 66.3833% ( 259) 00:07:39.119 9527.926 - 9578.338: 68.0453% ( 217) 00:07:39.119 9578.338 - 9628.751: 69.6308% ( 207) 00:07:39.119 9628.751 - 9679.163: 71.0784% ( 189) 00:07:39.119 9679.163 - 9729.575: 72.3192% ( 162) 00:07:39.119 9729.575 - 9779.988: 73.4835% ( 152) 00:07:39.119 9779.988 - 9830.400: 74.3183% ( 109) 00:07:39.119 9830.400 - 9880.812: 75.0919% ( 101) 00:07:39.119 9880.812 - 9931.225: 75.6664% ( 75) 00:07:39.119 9931.225 - 9981.637: 76.2178% ( 72) 00:07:39.119 9981.637 - 10032.049: 76.7080% ( 64) 00:07:39.119 10032.049 - 10082.462: 77.2365% ( 69) 00:07:39.119 10082.462 - 10132.874: 77.6654% ( 56) 00:07:39.119 10132.874 - 10183.286: 78.1327% ( 61) 00:07:39.119 10183.286 - 10233.698: 78.5692% ( 57) 00:07:39.119 10233.698 - 10284.111: 78.9905% ( 55) 00:07:39.119 10284.111 - 10334.523: 79.4041% ( 54) 00:07:39.119 10334.523 - 10384.935: 79.7564% ( 46) 00:07:39.119 10384.935 - 10435.348: 80.0475% ( 38) 00:07:39.119 10435.348 - 10485.760: 80.3768% ( 43) 00:07:39.120 10485.760 - 10536.172: 80.6679% ( 38) 00:07:39.120 10536.172 - 10586.585: 80.9896% ( 42) 00:07:39.120 10586.585 - 10636.997: 81.2883% ( 39) 00:07:39.120 10636.997 - 10687.409: 81.6100% ( 42) 00:07:39.120 10687.409 - 10737.822: 81.9240% ( 41) 00:07:39.120 10737.822 - 10788.234: 82.2993% ( 49) 00:07:39.120 10788.234 - 10838.646: 82.5827% ( 37) 00:07:39.120 10838.646 - 10889.058: 82.8814% ( 39) 00:07:39.120 10889.058 - 10939.471: 83.1265% ( 32) 00:07:39.120 10939.471 - 10989.883: 83.4252% ( 39) 00:07:39.120 10989.883 - 11040.295: 83.7163% ( 38) 00:07:39.120 11040.295 - 11090.708: 84.0763% ( 47) 00:07:39.120 11090.708 - 11141.120: 84.3137% ( 31) 00:07:39.120 11141.120 - 11191.532: 84.5971% ( 37) 00:07:39.120 11191.532 - 11241.945: 84.9341% ( 44) 00:07:39.120 11241.945 - 11292.357: 85.2558% ( 42) 00:07:39.120 11292.357 - 11342.769: 85.6388% ( 50) 00:07:39.120 11342.769 - 11393.182: 86.0524% ( 54) 00:07:39.120 11393.182 - 11443.594: 86.4124% ( 47) 00:07:39.120 11443.594 - 11494.006: 86.8260% ( 54) 00:07:39.120 11494.006 - 11544.418: 87.1706% ( 45) 00:07:39.120 11544.418 - 11594.831: 87.5536% ( 50) 00:07:39.120 11594.831 - 11645.243: 87.9213% ( 48) 00:07:39.120 11645.243 - 11695.655: 88.2889% ( 48) 00:07:39.120 11695.655 - 11746.068: 88.6412% ( 46) 00:07:39.120 11746.068 - 11796.480: 88.9476% ( 40) 00:07:39.120 11796.480 - 11846.892: 89.2770% ( 43) 00:07:39.120 11846.892 - 11897.305: 89.5680% ( 38) 00:07:39.120 11897.305 - 11947.717: 89.8744% ( 40) 00:07:39.120 11947.717 - 11998.129: 90.2114% ( 44) 00:07:39.120 11998.129 - 12048.542: 90.5944% ( 50) 00:07:39.120 12048.542 - 12098.954: 90.9084% ( 41) 
00:07:39.120 12098.954 - 12149.366: 91.1688% ( 34) 00:07:39.120 12149.366 - 12199.778: 91.4139% ( 32) 00:07:39.120 12199.778 - 12250.191: 91.6590% ( 32) 00:07:39.120 12250.191 - 12300.603: 91.8964% ( 31) 00:07:39.120 12300.603 - 12351.015: 92.0726% ( 23) 00:07:39.120 12351.015 - 12401.428: 92.2641% ( 25) 00:07:39.120 12401.428 - 12451.840: 92.4479% ( 24) 00:07:39.120 12451.840 - 12502.252: 92.6164% ( 22) 00:07:39.120 12502.252 - 12552.665: 92.7926% ( 23) 00:07:39.120 12552.665 - 12603.077: 92.9534% ( 21) 00:07:39.120 12603.077 - 12653.489: 93.1143% ( 21) 00:07:39.120 12653.489 - 12703.902: 93.3211% ( 27) 00:07:39.120 12703.902 - 12754.314: 93.5585% ( 31) 00:07:39.120 12754.314 - 12804.726: 93.7960% ( 31) 00:07:39.120 12804.726 - 12855.138: 94.0104% ( 28) 00:07:39.120 12855.138 - 12905.551: 94.2555% ( 32) 00:07:39.120 12905.551 - 13006.375: 94.6155% ( 47) 00:07:39.120 13006.375 - 13107.200: 94.9602% ( 45) 00:07:39.120 13107.200 - 13208.025: 95.2359% ( 36) 00:07:39.120 13208.025 - 13308.849: 95.5959% ( 47) 00:07:39.120 13308.849 - 13409.674: 95.9176% ( 42) 00:07:39.120 13409.674 - 13510.498: 96.2776% ( 47) 00:07:39.120 13510.498 - 13611.323: 96.6529% ( 49) 00:07:39.120 13611.323 - 13712.148: 96.9133% ( 34) 00:07:39.120 13712.148 - 13812.972: 97.0971% ( 24) 00:07:39.120 13812.972 - 13913.797: 97.2963% ( 26) 00:07:39.120 13913.797 - 14014.622: 97.4648% ( 22) 00:07:39.120 14014.622 - 14115.446: 97.6409% ( 23) 00:07:39.120 14115.446 - 14216.271: 97.7941% ( 20) 00:07:39.120 14216.271 - 14317.095: 97.8860% ( 12) 00:07:39.120 14317.095 - 14417.920: 97.9473% ( 8) 00:07:39.120 14417.920 - 14518.745: 97.9933% ( 6) 00:07:39.120 14518.745 - 14619.569: 98.0622% ( 9) 00:07:39.120 14619.569 - 14720.394: 98.1235% ( 8) 00:07:39.120 14720.394 - 14821.218: 98.2077% ( 11) 00:07:39.120 14821.218 - 14922.043: 98.2843% ( 10) 00:07:39.120 14922.043 - 15022.868: 98.3762% ( 12) 00:07:39.120 15022.868 - 15123.692: 98.4452% ( 9) 00:07:39.120 15123.692 - 15224.517: 98.5447% ( 13) 00:07:39.120 15224.517 - 15325.342: 98.6213% ( 10) 00:07:39.120 15325.342 - 15426.166: 98.7132% ( 12) 00:07:39.120 15426.166 - 15526.991: 98.7975% ( 11) 00:07:39.120 15526.991 - 15627.815: 98.8817% ( 11) 00:07:39.120 15627.815 - 15728.640: 98.9660% ( 11) 00:07:39.120 15728.640 - 15829.465: 99.0119% ( 6) 00:07:39.120 15829.465 - 15930.289: 99.0196% ( 1) 00:07:39.120 19963.274 - 20064.098: 99.0579% ( 5) 00:07:39.120 20064.098 - 20164.923: 99.1039% ( 6) 00:07:39.120 20164.923 - 20265.748: 99.1422% ( 5) 00:07:39.120 20265.748 - 20366.572: 99.1805% ( 5) 00:07:39.120 20366.572 - 20467.397: 99.2264% ( 6) 00:07:39.120 20467.397 - 20568.222: 99.2570% ( 4) 00:07:39.120 20568.222 - 20669.046: 99.3030% ( 6) 00:07:39.120 20669.046 - 20769.871: 99.3490% ( 6) 00:07:39.120 20769.871 - 20870.695: 99.3949% ( 6) 00:07:39.120 20870.695 - 20971.520: 99.4332% ( 5) 00:07:39.120 20971.520 - 21072.345: 99.4792% ( 6) 00:07:39.120 21072.345 - 21173.169: 99.5098% ( 4) 00:07:39.120 26012.751 - 26214.400: 99.5787% ( 9) 00:07:39.120 26214.400 - 26416.049: 99.6630% ( 11) 00:07:39.120 26416.049 - 26617.698: 99.7549% ( 12) 00:07:39.120 26617.698 - 26819.348: 99.8315% ( 10) 00:07:39.120 26819.348 - 27020.997: 99.9234% ( 12) 00:07:39.120 27020.997 - 27222.646: 100.0000% ( 10) 00:07:39.120 00:07:39.120 09:25:06 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:40.505 Initializing NVMe Controllers 00:07:40.505 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.505 Attached to NVMe 
Controller at 0000:00:11.0 [1b36:0010] 00:07:40.505 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.505 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.505 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:40.505 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:40.505 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:40.505 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:40.505 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:40.505 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:40.505 Initialization complete. Launching workers. 00:07:40.505 ======================================================== 00:07:40.505 Latency(us) 00:07:40.505 Device Information : IOPS MiB/s Average min max 00:07:40.505 PCIE (0000:00:10.0) NSID 1 from core 0: 14029.10 164.40 9127.84 6972.13 30918.73 00:07:40.505 PCIE (0000:00:11.0) NSID 1 from core 0: 14029.10 164.40 9119.14 6534.81 30547.68 00:07:40.505 PCIE (0000:00:13.0) NSID 1 from core 0: 14029.10 164.40 9109.82 5344.82 30566.31 00:07:40.505 PCIE (0000:00:12.0) NSID 1 from core 0: 14029.10 164.40 9100.25 5032.34 29899.32 00:07:40.505 PCIE (0000:00:12.0) NSID 2 from core 0: 14029.10 164.40 9090.85 4353.19 29514.29 00:07:40.505 PCIE (0000:00:12.0) NSID 3 from core 0: 14029.10 164.40 9081.39 4429.21 29189.37 00:07:40.505 ======================================================== 00:07:40.505 Total : 84174.61 986.42 9104.88 4353.19 30918.73 00:07:40.505 00:07:40.505 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.505 ================================================================================= 00:07:40.505 1.00000% : 7561.846us 00:07:40.505 10.00000% : 7914.732us 00:07:40.505 25.00000% : 8267.618us 00:07:40.505 50.00000% : 8721.329us 00:07:40.505 75.00000% : 9275.865us 00:07:40.505 90.00000% : 10536.172us 00:07:40.505 95.00000% : 12098.954us 00:07:40.505 98.00000% : 13812.972us 00:07:40.505 99.00000% : 15930.289us 00:07:40.505 99.50000% : 22080.591us 00:07:40.505 99.90000% : 30650.683us 00:07:40.505 99.99000% : 31053.982us 00:07:40.505 99.99900% : 31053.982us 00:07:40.505 99.99990% : 31053.982us 00:07:40.505 99.99999% : 31053.982us 00:07:40.505 00:07:40.505 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.505 ================================================================================= 00:07:40.505 1.00000% : 7662.671us 00:07:40.505 10.00000% : 8015.557us 00:07:40.505 25.00000% : 8267.618us 00:07:40.505 50.00000% : 8670.917us 00:07:40.505 75.00000% : 9225.452us 00:07:40.505 90.00000% : 10435.348us 00:07:40.505 95.00000% : 12098.954us 00:07:40.505 98.00000% : 13611.323us 00:07:40.505 99.00000% : 15728.640us 00:07:40.505 99.50000% : 22786.363us 00:07:40.505 99.90000% : 30247.385us 00:07:40.505 99.99000% : 30650.683us 00:07:40.505 99.99900% : 30650.683us 00:07:40.505 99.99990% : 30650.683us 00:07:40.505 99.99999% : 30650.683us 00:07:40.505 00:07:40.505 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.505 ================================================================================= 00:07:40.505 1.00000% : 7662.671us 00:07:40.505 10.00000% : 8015.557us 00:07:40.505 25.00000% : 8267.618us 00:07:40.505 50.00000% : 8670.917us 00:07:40.505 75.00000% : 9225.452us 00:07:40.505 90.00000% : 10485.760us 00:07:40.505 95.00000% : 12098.954us 00:07:40.505 98.00000% : 13712.148us 00:07:40.505 99.00000% : 15829.465us 00:07:40.505 99.50000% : 22786.363us 00:07:40.506 99.90000% : 30247.385us 00:07:40.506 
99.99000% : 30650.683us 00:07:40.506 99.99900% : 30650.683us 00:07:40.506 99.99990% : 30650.683us 00:07:40.506 99.99999% : 30650.683us 00:07:40.506 00:07:40.506 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.506 ================================================================================= 00:07:40.506 1.00000% : 7662.671us 00:07:40.506 10.00000% : 8015.557us 00:07:40.506 25.00000% : 8267.618us 00:07:40.506 50.00000% : 8670.917us 00:07:40.506 75.00000% : 9275.865us 00:07:40.506 90.00000% : 10485.760us 00:07:40.506 95.00000% : 12149.366us 00:07:40.506 98.00000% : 14014.622us 00:07:40.506 99.00000% : 15224.517us 00:07:40.506 99.50000% : 22887.188us 00:07:40.506 99.90000% : 29642.437us 00:07:40.506 99.99000% : 30045.735us 00:07:40.506 99.99900% : 30045.735us 00:07:40.506 99.99990% : 30045.735us 00:07:40.506 99.99999% : 30045.735us 00:07:40.506 00:07:40.506 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.506 ================================================================================= 00:07:40.506 1.00000% : 7713.083us 00:07:40.506 10.00000% : 8015.557us 00:07:40.506 25.00000% : 8267.618us 00:07:40.506 50.00000% : 8670.917us 00:07:40.506 75.00000% : 9275.865us 00:07:40.506 90.00000% : 10485.760us 00:07:40.506 95.00000% : 12048.542us 00:07:40.506 98.00000% : 13913.797us 00:07:40.506 99.00000% : 15526.991us 00:07:40.506 99.50000% : 22584.714us 00:07:40.506 99.90000% : 29239.138us 00:07:40.506 99.99000% : 29642.437us 00:07:40.506 99.99900% : 29642.437us 00:07:40.506 99.99990% : 29642.437us 00:07:40.506 99.99999% : 29642.437us 00:07:40.506 00:07:40.506 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.506 ================================================================================= 00:07:40.506 1.00000% : 7612.258us 00:07:40.506 10.00000% : 8015.557us 00:07:40.506 25.00000% : 8267.618us 00:07:40.506 50.00000% : 8620.505us 00:07:40.506 75.00000% : 9225.452us 00:07:40.506 90.00000% : 10435.348us 00:07:40.506 95.00000% : 11947.717us 00:07:40.506 98.00000% : 13409.674us 00:07:40.506 99.00000% : 15829.465us 00:07:40.506 99.50000% : 22282.240us 00:07:40.506 99.90000% : 29037.489us 00:07:40.506 99.99000% : 29239.138us 00:07:40.506 99.99900% : 29239.138us 00:07:40.506 99.99990% : 29239.138us 00:07:40.506 99.99999% : 29239.138us 00:07:40.506 00:07:40.506 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.506 ============================================================================== 00:07:40.506 Range in us Cumulative IO count 00:07:40.506 6956.898 - 7007.311: 0.0213% ( 3) 00:07:40.506 7007.311 - 7057.723: 0.0568% ( 5) 00:07:40.506 7057.723 - 7108.135: 0.1491% ( 13) 00:07:40.506 7108.135 - 7158.548: 0.2557% ( 15) 00:07:40.506 7158.548 - 7208.960: 0.2841% ( 4) 00:07:40.506 7208.960 - 7259.372: 0.3125% ( 4) 00:07:40.506 7259.372 - 7309.785: 0.3764% ( 9) 00:07:40.506 7309.785 - 7360.197: 0.4901% ( 16) 00:07:40.506 7360.197 - 7410.609: 0.5682% ( 11) 00:07:40.506 7410.609 - 7461.022: 0.6321% ( 9) 00:07:40.506 7461.022 - 7511.434: 0.7528% ( 17) 00:07:40.506 7511.434 - 7561.846: 1.0653% ( 44) 00:07:40.506 7561.846 - 7612.258: 1.5625% ( 70) 00:07:40.506 7612.258 - 7662.671: 2.3722% ( 114) 00:07:40.506 7662.671 - 7713.083: 3.8352% ( 206) 00:07:40.506 7713.083 - 7763.495: 5.3551% ( 214) 00:07:40.506 7763.495 - 7813.908: 7.2798% ( 271) 00:07:40.506 7813.908 - 7864.320: 9.0767% ( 253) 00:07:40.506 7864.320 - 7914.732: 11.0014% ( 271) 00:07:40.506 7914.732 - 7965.145: 12.9688% ( 277) 00:07:40.506 7965.145 - 
8015.557: 15.0071% ( 287) 00:07:40.506 8015.557 - 8065.969: 16.8679% ( 262) 00:07:40.506 8065.969 - 8116.382: 19.3892% ( 355) 00:07:40.506 8116.382 - 8166.794: 22.0881% ( 380) 00:07:40.506 8166.794 - 8217.206: 24.6023% ( 354) 00:07:40.506 8217.206 - 8267.618: 27.2301% ( 370) 00:07:40.506 8267.618 - 8318.031: 30.0923% ( 403) 00:07:40.506 8318.031 - 8368.443: 32.9830% ( 407) 00:07:40.506 8368.443 - 8418.855: 35.8878% ( 409) 00:07:40.506 8418.855 - 8469.268: 38.9986% ( 438) 00:07:40.506 8469.268 - 8519.680: 41.8679% ( 404) 00:07:40.506 8519.680 - 8570.092: 44.6662% ( 394) 00:07:40.506 8570.092 - 8620.505: 47.2798% ( 368) 00:07:40.506 8620.505 - 8670.917: 49.8366% ( 360) 00:07:40.506 8670.917 - 8721.329: 52.3224% ( 350) 00:07:40.506 8721.329 - 8771.742: 55.1634% ( 400) 00:07:40.506 8771.742 - 8822.154: 57.7699% ( 367) 00:07:40.506 8822.154 - 8872.566: 60.4616% ( 379) 00:07:40.506 8872.566 - 8922.978: 62.7983% ( 329) 00:07:40.506 8922.978 - 8973.391: 64.9432% ( 302) 00:07:40.506 8973.391 - 9023.803: 66.9318% ( 280) 00:07:40.506 9023.803 - 9074.215: 68.7358% ( 254) 00:07:40.506 9074.215 - 9124.628: 70.3267% ( 224) 00:07:40.506 9124.628 - 9175.040: 72.0028% ( 236) 00:07:40.506 9175.040 - 9225.452: 73.6435% ( 231) 00:07:40.506 9225.452 - 9275.865: 75.0923% ( 204) 00:07:40.506 9275.865 - 9326.277: 76.3210% ( 173) 00:07:40.506 9326.277 - 9376.689: 77.8125% ( 210) 00:07:40.506 9376.689 - 9427.102: 79.1122% ( 183) 00:07:40.506 9427.102 - 9477.514: 80.3338% ( 172) 00:07:40.506 9477.514 - 9527.926: 81.4844% ( 162) 00:07:40.506 9527.926 - 9578.338: 82.4290% ( 133) 00:07:40.506 9578.338 - 9628.751: 83.3736% ( 133) 00:07:40.506 9628.751 - 9679.163: 84.0412% ( 94) 00:07:40.506 9679.163 - 9729.575: 84.5952% ( 78) 00:07:40.506 9729.575 - 9779.988: 85.1491% ( 78) 00:07:40.506 9779.988 - 9830.400: 85.5682% ( 59) 00:07:40.506 9830.400 - 9880.812: 86.0298% ( 65) 00:07:40.506 9880.812 - 9931.225: 86.5057% ( 67) 00:07:40.506 9931.225 - 9981.637: 86.9034% ( 56) 00:07:40.506 9981.637 - 10032.049: 87.2514% ( 49) 00:07:40.506 10032.049 - 10082.462: 87.5710% ( 45) 00:07:40.506 10082.462 - 10132.874: 87.9048% ( 47) 00:07:40.506 10132.874 - 10183.286: 88.3239% ( 59) 00:07:40.506 10183.286 - 10233.698: 88.6790% ( 50) 00:07:40.506 10233.698 - 10284.111: 88.9844% ( 43) 00:07:40.506 10284.111 - 10334.523: 89.2259% ( 34) 00:07:40.506 10334.523 - 10384.935: 89.4531% ( 32) 00:07:40.506 10384.935 - 10435.348: 89.6875% ( 33) 00:07:40.506 10435.348 - 10485.760: 89.8651% ( 25) 00:07:40.506 10485.760 - 10536.172: 90.0923% ( 32) 00:07:40.506 10536.172 - 10586.585: 90.2912% ( 28) 00:07:40.506 10586.585 - 10636.997: 90.4759% ( 26) 00:07:40.506 10636.997 - 10687.409: 90.7386% ( 37) 00:07:40.506 10687.409 - 10737.822: 90.9659% ( 32) 00:07:40.506 10737.822 - 10788.234: 91.1577% ( 27) 00:07:40.506 10788.234 - 10838.646: 91.3139% ( 22) 00:07:40.506 10838.646 - 10889.058: 91.4134% ( 14) 00:07:40.506 10889.058 - 10939.471: 91.5270% ( 16) 00:07:40.506 10939.471 - 10989.883: 91.7401% ( 30) 00:07:40.506 10989.883 - 11040.295: 91.9673% ( 32) 00:07:40.506 11040.295 - 11090.708: 92.1165% ( 21) 00:07:40.506 11090.708 - 11141.120: 92.1875% ( 10) 00:07:40.506 11141.120 - 11191.532: 92.3651% ( 25) 00:07:40.506 11191.532 - 11241.945: 92.5213% ( 22) 00:07:40.506 11241.945 - 11292.357: 92.6989% ( 25) 00:07:40.506 11292.357 - 11342.769: 92.8409% ( 20) 00:07:40.506 11342.769 - 11393.182: 92.9972% ( 22) 00:07:40.506 11393.182 - 11443.594: 93.1179% ( 17) 00:07:40.506 11443.594 - 11494.006: 93.3026% ( 26) 00:07:40.506 11494.006 - 11544.418: 93.4588% ( 22) 
00:07:40.506 11544.418 - 11594.831: 93.5653% ( 15) 00:07:40.506 11594.831 - 11645.243: 93.7003% ( 19) 00:07:40.506 11645.243 - 11695.655: 93.8707% ( 24) 00:07:40.506 11695.655 - 11746.068: 93.9844% ( 16) 00:07:40.506 11746.068 - 11796.480: 94.1122% ( 18) 00:07:40.506 11796.480 - 11846.892: 94.3395% ( 32) 00:07:40.506 11846.892 - 11897.305: 94.4886% ( 21) 00:07:40.506 11897.305 - 11947.717: 94.6662% ( 25) 00:07:40.506 11947.717 - 11998.129: 94.7940% ( 18) 00:07:40.506 11998.129 - 12048.542: 94.9361% ( 20) 00:07:40.506 12048.542 - 12098.954: 95.0071% ( 10) 00:07:40.506 12098.954 - 12149.366: 95.0710% ( 9) 00:07:40.506 12149.366 - 12199.778: 95.1705% ( 14) 00:07:40.506 12199.778 - 12250.191: 95.2060% ( 5) 00:07:40.506 12250.191 - 12300.603: 95.3054% ( 14) 00:07:40.506 12300.603 - 12351.015: 95.3622% ( 8) 00:07:40.506 12351.015 - 12401.428: 95.4261% ( 9) 00:07:40.506 12401.428 - 12451.840: 95.5043% ( 11) 00:07:40.506 12451.840 - 12502.252: 95.5824% ( 11) 00:07:40.506 12502.252 - 12552.665: 95.6534% ( 10) 00:07:40.506 12552.665 - 12603.077: 95.7386% ( 12) 00:07:40.506 12603.077 - 12653.489: 95.8594% ( 17) 00:07:40.506 12653.489 - 12703.902: 95.9801% ( 17) 00:07:40.506 12703.902 - 12754.314: 96.0795% ( 14) 00:07:40.506 12754.314 - 12804.726: 96.2145% ( 19) 00:07:40.506 12804.726 - 12855.138: 96.2926% ( 11) 00:07:40.506 12855.138 - 12905.551: 96.4062% ( 16) 00:07:40.506 12905.551 - 13006.375: 96.6193% ( 30) 00:07:40.506 13006.375 - 13107.200: 96.7259% ( 15) 00:07:40.506 13107.200 - 13208.025: 96.9318% ( 29) 00:07:40.506 13208.025 - 13308.849: 97.1875% ( 36) 00:07:40.506 13308.849 - 13409.674: 97.3224% ( 19) 00:07:40.506 13409.674 - 13510.498: 97.4858% ( 23) 00:07:40.506 13510.498 - 13611.323: 97.6278% ( 20) 00:07:40.506 13611.323 - 13712.148: 97.7770% ( 21) 00:07:40.506 13712.148 - 13812.972: 98.0824% ( 43) 00:07:40.506 13812.972 - 13913.797: 98.1889% ( 15) 00:07:40.507 13913.797 - 14014.622: 98.2173% ( 4) 00:07:40.507 14115.446 - 14216.271: 98.2599% ( 6) 00:07:40.507 14216.271 - 14317.095: 98.3026% ( 6) 00:07:40.507 14317.095 - 14417.920: 98.3523% ( 7) 00:07:40.507 14417.920 - 14518.745: 98.4020% ( 7) 00:07:40.507 14518.745 - 14619.569: 98.4659% ( 9) 00:07:40.507 14619.569 - 14720.394: 98.4801% ( 2) 00:07:40.507 14720.394 - 14821.218: 98.4943% ( 2) 00:07:40.507 14821.218 - 14922.043: 98.5298% ( 5) 00:07:40.507 14922.043 - 15022.868: 98.5440% ( 2) 00:07:40.507 15022.868 - 15123.692: 98.5866% ( 6) 00:07:40.507 15123.692 - 15224.517: 98.6364% ( 7) 00:07:40.507 15224.517 - 15325.342: 98.7074% ( 10) 00:07:40.507 15325.342 - 15426.166: 98.7642% ( 8) 00:07:40.507 15426.166 - 15526.991: 98.9489% ( 26) 00:07:40.507 15526.991 - 15627.815: 98.9631% ( 2) 00:07:40.507 15728.640 - 15829.465: 98.9915% ( 4) 00:07:40.507 15829.465 - 15930.289: 99.0341% ( 6) 00:07:40.507 15930.289 - 16031.114: 99.0696% ( 5) 00:07:40.507 16031.114 - 16131.938: 99.0909% ( 3) 00:07:40.507 20669.046 - 20769.871: 99.0980% ( 1) 00:07:40.507 20769.871 - 20870.695: 99.1122% ( 2) 00:07:40.507 20870.695 - 20971.520: 99.1193% ( 1) 00:07:40.507 20971.520 - 21072.345: 99.1903% ( 10) 00:07:40.507 21072.345 - 21173.169: 99.2116% ( 3) 00:07:40.507 21173.169 - 21273.994: 99.2401% ( 4) 00:07:40.507 21273.994 - 21374.818: 99.2756% ( 5) 00:07:40.507 21374.818 - 21475.643: 99.3111% ( 5) 00:07:40.507 21475.643 - 21576.468: 99.3466% ( 5) 00:07:40.507 21576.468 - 21677.292: 99.3679% ( 3) 00:07:40.507 21677.292 - 21778.117: 99.4105% ( 6) 00:07:40.507 21778.117 - 21878.942: 99.4389% ( 4) 00:07:40.507 21878.942 - 21979.766: 99.4744% ( 5) 00:07:40.507 
21979.766 - 22080.591: 99.5099% ( 5) 00:07:40.507 22080.591 - 22181.415: 99.5384% ( 4) 00:07:40.507 22181.415 - 22282.240: 99.5455% ( 1) 00:07:40.507 28835.840 - 29037.489: 99.5668% ( 3) 00:07:40.507 29037.489 - 29239.138: 99.6236% ( 8) 00:07:40.507 29239.138 - 29440.788: 99.6662% ( 6) 00:07:40.507 29440.788 - 29642.437: 99.7088% ( 6) 00:07:40.507 29642.437 - 29844.086: 99.7585% ( 7) 00:07:40.507 29844.086 - 30045.735: 99.8011% ( 6) 00:07:40.507 30045.735 - 30247.385: 99.8438% ( 6) 00:07:40.507 30247.385 - 30449.034: 99.8935% ( 7) 00:07:40.507 30449.034 - 30650.683: 99.9432% ( 7) 00:07:40.507 30650.683 - 30852.332: 99.9858% ( 6) 00:07:40.507 30852.332 - 31053.982: 100.0000% ( 2) 00:07:40.507 00:07:40.507 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.507 ============================================================================== 00:07:40.507 Range in us Cumulative IO count 00:07:40.507 6503.188 - 6553.600: 0.0142% ( 2) 00:07:40.507 6553.600 - 6604.012: 0.0426% ( 4) 00:07:40.507 6604.012 - 6654.425: 0.0994% ( 8) 00:07:40.507 6654.425 - 6704.837: 0.1634% ( 9) 00:07:40.507 6704.837 - 6755.249: 0.2202% ( 8) 00:07:40.507 6755.249 - 6805.662: 0.2983% ( 11) 00:07:40.507 6805.662 - 6856.074: 0.3409% ( 6) 00:07:40.507 6856.074 - 6906.486: 0.3906% ( 7) 00:07:40.507 6906.486 - 6956.898: 0.4261% ( 5) 00:07:40.507 6956.898 - 7007.311: 0.4474% ( 3) 00:07:40.507 7007.311 - 7057.723: 0.4545% ( 1) 00:07:40.507 7360.197 - 7410.609: 0.4688% ( 2) 00:07:40.507 7410.609 - 7461.022: 0.5043% ( 5) 00:07:40.507 7461.022 - 7511.434: 0.5469% ( 6) 00:07:40.507 7511.434 - 7561.846: 0.6250% ( 11) 00:07:40.507 7561.846 - 7612.258: 0.7528% ( 18) 00:07:40.507 7612.258 - 7662.671: 1.0085% ( 36) 00:07:40.507 7662.671 - 7713.083: 1.3636% ( 50) 00:07:40.507 7713.083 - 7763.495: 2.0739% ( 100) 00:07:40.507 7763.495 - 7813.908: 3.1321% ( 149) 00:07:40.507 7813.908 - 7864.320: 4.5241% ( 196) 00:07:40.507 7864.320 - 7914.732: 6.3068% ( 251) 00:07:40.507 7914.732 - 7965.145: 8.2599% ( 275) 00:07:40.507 7965.145 - 8015.557: 10.7173% ( 346) 00:07:40.507 8015.557 - 8065.969: 13.5014% ( 392) 00:07:40.507 8065.969 - 8116.382: 16.6619% ( 445) 00:07:40.507 8116.382 - 8166.794: 19.9077% ( 457) 00:07:40.507 8166.794 - 8217.206: 23.0611% ( 444) 00:07:40.507 8217.206 - 8267.618: 26.0795% ( 425) 00:07:40.507 8267.618 - 8318.031: 29.7017% ( 510) 00:07:40.507 8318.031 - 8368.443: 33.1392% ( 484) 00:07:40.507 8368.443 - 8418.855: 36.5128% ( 475) 00:07:40.507 8418.855 - 8469.268: 39.7088% ( 450) 00:07:40.507 8469.268 - 8519.680: 42.9119% ( 451) 00:07:40.507 8519.680 - 8570.092: 46.0298% ( 439) 00:07:40.507 8570.092 - 8620.505: 48.8210% ( 393) 00:07:40.507 8620.505 - 8670.917: 51.5057% ( 378) 00:07:40.507 8670.917 - 8721.329: 53.9702% ( 347) 00:07:40.507 8721.329 - 8771.742: 56.4062% ( 343) 00:07:40.507 8771.742 - 8822.154: 58.7855% ( 335) 00:07:40.507 8822.154 - 8872.566: 61.1435% ( 332) 00:07:40.507 8872.566 - 8922.978: 63.2244% ( 293) 00:07:40.507 8922.978 - 8973.391: 65.4119% ( 308) 00:07:40.507 8973.391 - 9023.803: 67.5781% ( 305) 00:07:40.507 9023.803 - 9074.215: 69.5312% ( 275) 00:07:40.507 9074.215 - 9124.628: 71.3281% ( 253) 00:07:40.507 9124.628 - 9175.040: 73.1818% ( 261) 00:07:40.507 9175.040 - 9225.452: 75.0568% ( 264) 00:07:40.507 9225.452 - 9275.865: 76.7116% ( 233) 00:07:40.507 9275.865 - 9326.277: 78.1676% ( 205) 00:07:40.507 9326.277 - 9376.689: 79.6023% ( 202) 00:07:40.507 9376.689 - 9427.102: 80.8168% ( 171) 00:07:40.507 9427.102 - 9477.514: 81.8182% ( 141) 00:07:40.507 9477.514 - 9527.926: 82.7060% ( 
125) 00:07:40.507 9527.926 - 9578.338: 83.3949% ( 97) 00:07:40.507 9578.338 - 9628.751: 84.0767% ( 96) 00:07:40.507 9628.751 - 9679.163: 84.6662% ( 83) 00:07:40.507 9679.163 - 9729.575: 85.1420% ( 67) 00:07:40.507 9729.575 - 9779.988: 85.6463% ( 71) 00:07:40.507 9779.988 - 9830.400: 86.0014% ( 50) 00:07:40.507 9830.400 - 9880.812: 86.3991% ( 56) 00:07:40.507 9880.812 - 9931.225: 86.7116% ( 44) 00:07:40.507 9931.225 - 9981.637: 87.0597% ( 49) 00:07:40.507 9981.637 - 10032.049: 87.4148% ( 50) 00:07:40.507 10032.049 - 10082.462: 87.7699% ( 50) 00:07:40.507 10082.462 - 10132.874: 88.1108% ( 48) 00:07:40.507 10132.874 - 10183.286: 88.4659% ( 50) 00:07:40.507 10183.286 - 10233.698: 88.9276% ( 65) 00:07:40.507 10233.698 - 10284.111: 89.3395% ( 58) 00:07:40.507 10284.111 - 10334.523: 89.6236% ( 40) 00:07:40.507 10334.523 - 10384.935: 89.8438% ( 31) 00:07:40.507 10384.935 - 10435.348: 90.0000% ( 22) 00:07:40.507 10435.348 - 10485.760: 90.2699% ( 38) 00:07:40.507 10485.760 - 10536.172: 90.5824% ( 44) 00:07:40.507 10536.172 - 10586.585: 90.8239% ( 34) 00:07:40.507 10586.585 - 10636.997: 90.9943% ( 24) 00:07:40.507 10636.997 - 10687.409: 91.2003% ( 29) 00:07:40.507 10687.409 - 10737.822: 91.4276% ( 32) 00:07:40.507 10737.822 - 10788.234: 91.6903% ( 37) 00:07:40.507 10788.234 - 10838.646: 91.9460% ( 36) 00:07:40.507 10838.646 - 10889.058: 92.1378% ( 27) 00:07:40.507 10889.058 - 10939.471: 92.3722% ( 33) 00:07:40.507 10939.471 - 10989.883: 92.5426% ( 24) 00:07:40.507 10989.883 - 11040.295: 92.6776% ( 19) 00:07:40.507 11040.295 - 11090.708: 92.8267% ( 21) 00:07:40.507 11090.708 - 11141.120: 92.9474% ( 17) 00:07:40.507 11141.120 - 11191.532: 93.0611% ( 16) 00:07:40.507 11191.532 - 11241.945: 93.1676% ( 15) 00:07:40.507 11241.945 - 11292.357: 93.2884% ( 17) 00:07:40.507 11292.357 - 11342.769: 93.4162% ( 18) 00:07:40.507 11342.769 - 11393.182: 93.5085% ( 13) 00:07:40.507 11393.182 - 11443.594: 93.6151% ( 15) 00:07:40.507 11443.594 - 11494.006: 93.7216% ( 15) 00:07:40.507 11494.006 - 11544.418: 93.8423% ( 17) 00:07:40.507 11544.418 - 11594.831: 93.9489% ( 15) 00:07:40.507 11594.831 - 11645.243: 94.0483% ( 14) 00:07:40.507 11645.243 - 11695.655: 94.1690% ( 17) 00:07:40.507 11695.655 - 11746.068: 94.3040% ( 19) 00:07:40.507 11746.068 - 11796.480: 94.4318% ( 18) 00:07:40.507 11796.480 - 11846.892: 94.5455% ( 16) 00:07:40.507 11846.892 - 11897.305: 94.6378% ( 13) 00:07:40.507 11897.305 - 11947.717: 94.7585% ( 17) 00:07:40.507 11947.717 - 11998.129: 94.8509% ( 13) 00:07:40.507 11998.129 - 12048.542: 94.9361% ( 12) 00:07:40.507 12048.542 - 12098.954: 95.0213% ( 12) 00:07:40.507 12098.954 - 12149.366: 95.0852% ( 9) 00:07:40.507 12149.366 - 12199.778: 95.1278% ( 6) 00:07:40.507 12199.778 - 12250.191: 95.1776% ( 7) 00:07:40.507 12250.191 - 12300.603: 95.2202% ( 6) 00:07:40.507 12300.603 - 12351.015: 95.2770% ( 8) 00:07:40.507 12351.015 - 12401.428: 95.3551% ( 11) 00:07:40.507 12401.428 - 12451.840: 95.4474% ( 13) 00:07:40.507 12451.840 - 12502.252: 95.6037% ( 22) 00:07:40.507 12502.252 - 12552.665: 95.7955% ( 27) 00:07:40.507 12552.665 - 12603.077: 95.9446% ( 21) 00:07:40.507 12603.077 - 12653.489: 96.0085% ( 9) 00:07:40.507 12653.489 - 12703.902: 96.0724% ( 9) 00:07:40.507 12703.902 - 12754.314: 96.1151% ( 6) 00:07:40.507 12754.314 - 12804.726: 96.1790% ( 9) 00:07:40.507 12804.726 - 12855.138: 96.2429% ( 9) 00:07:40.507 12855.138 - 12905.551: 96.3210% ( 11) 00:07:40.507 12905.551 - 13006.375: 96.4773% ( 22) 00:07:40.507 13006.375 - 13107.200: 96.6193% ( 20) 00:07:40.507 13107.200 - 13208.025: 96.9744% ( 50) 
00:07:40.507 13208.025 - 13308.849: 97.2301% ( 36) 00:07:40.507 13308.849 - 13409.674: 97.6705% ( 62) 00:07:40.507 13409.674 - 13510.498: 97.8409% ( 24) 00:07:40.507 13510.498 - 13611.323: 98.0469% ( 29) 00:07:40.507 13611.323 - 13712.148: 98.1960% ( 21) 00:07:40.507 13712.148 - 13812.972: 98.3665% ( 24) 00:07:40.507 13812.972 - 13913.797: 98.5014% ( 19) 00:07:40.508 13913.797 - 14014.622: 98.6080% ( 15) 00:07:40.508 14014.622 - 14115.446: 98.6364% ( 4) 00:07:40.508 15325.342 - 15426.166: 98.7216% ( 12) 00:07:40.508 15426.166 - 15526.991: 98.8849% ( 23) 00:07:40.508 15526.991 - 15627.815: 98.9773% ( 13) 00:07:40.508 15627.815 - 15728.640: 99.0128% ( 5) 00:07:40.508 15728.640 - 15829.465: 99.0483% ( 5) 00:07:40.508 15829.465 - 15930.289: 99.0838% ( 5) 00:07:40.508 15930.289 - 16031.114: 99.0909% ( 1) 00:07:40.508 21173.169 - 21273.994: 99.1122% ( 3) 00:07:40.508 21273.994 - 21374.818: 99.1406% ( 4) 00:07:40.508 21374.818 - 21475.643: 99.1690% ( 4) 00:07:40.508 21475.643 - 21576.468: 99.1974% ( 4) 00:07:40.508 21576.468 - 21677.292: 99.2188% ( 3) 00:07:40.508 21677.292 - 21778.117: 99.2472% ( 4) 00:07:40.508 21778.117 - 21878.942: 99.2756% ( 4) 00:07:40.508 21878.942 - 21979.766: 99.3040% ( 4) 00:07:40.508 21979.766 - 22080.591: 99.3324% ( 4) 00:07:40.508 22080.591 - 22181.415: 99.3537% ( 3) 00:07:40.508 22181.415 - 22282.240: 99.3750% ( 3) 00:07:40.508 22282.240 - 22383.065: 99.4034% ( 4) 00:07:40.508 22383.065 - 22483.889: 99.4318% ( 4) 00:07:40.508 22483.889 - 22584.714: 99.4602% ( 4) 00:07:40.508 22584.714 - 22685.538: 99.4886% ( 4) 00:07:40.508 22685.538 - 22786.363: 99.5170% ( 4) 00:07:40.508 22786.363 - 22887.188: 99.5455% ( 4) 00:07:40.508 28634.191 - 28835.840: 99.5526% ( 1) 00:07:40.508 28835.840 - 29037.489: 99.6023% ( 7) 00:07:40.508 29037.489 - 29239.138: 99.6591% ( 8) 00:07:40.508 29239.138 - 29440.788: 99.7088% ( 7) 00:07:40.508 29440.788 - 29642.437: 99.7514% ( 6) 00:07:40.508 29642.437 - 29844.086: 99.7940% ( 6) 00:07:40.508 29844.086 - 30045.735: 99.8580% ( 9) 00:07:40.508 30045.735 - 30247.385: 99.9148% ( 8) 00:07:40.508 30247.385 - 30449.034: 99.9716% ( 8) 00:07:40.508 30449.034 - 30650.683: 100.0000% ( 4) 00:07:40.508 00:07:40.508 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.508 ============================================================================== 00:07:40.508 Range in us Cumulative IO count 00:07:40.508 5343.705 - 5368.911: 0.0142% ( 2) 00:07:40.508 5368.911 - 5394.117: 0.0426% ( 4) 00:07:40.508 5394.117 - 5419.323: 0.0568% ( 2) 00:07:40.508 5419.323 - 5444.529: 0.0852% ( 4) 00:07:40.508 5444.529 - 5469.735: 0.1065% ( 3) 00:07:40.508 5469.735 - 5494.942: 0.1278% ( 3) 00:07:40.508 5494.942 - 5520.148: 0.1705% ( 6) 00:07:40.508 5520.148 - 5545.354: 0.2415% ( 10) 00:07:40.508 5545.354 - 5570.560: 0.2628% ( 3) 00:07:40.508 5570.560 - 5595.766: 0.2770% ( 2) 00:07:40.508 5595.766 - 5620.972: 0.2841% ( 1) 00:07:40.508 5620.972 - 5646.178: 0.2983% ( 2) 00:07:40.508 5646.178 - 5671.385: 0.3196% ( 3) 00:07:40.508 5671.385 - 5696.591: 0.3267% ( 1) 00:07:40.508 5696.591 - 5721.797: 0.3480% ( 3) 00:07:40.508 5721.797 - 5747.003: 0.3622% ( 2) 00:07:40.508 5747.003 - 5772.209: 0.3693% ( 1) 00:07:40.508 5772.209 - 5797.415: 0.3906% ( 3) 00:07:40.508 5797.415 - 5822.622: 0.4048% ( 2) 00:07:40.508 5822.622 - 5847.828: 0.4190% ( 2) 00:07:40.508 5847.828 - 5873.034: 0.4332% ( 2) 00:07:40.508 5873.034 - 5898.240: 0.4474% ( 2) 00:07:40.508 5898.240 - 5923.446: 0.4545% ( 1) 00:07:40.508 7360.197 - 7410.609: 0.4616% ( 1) 00:07:40.508 7410.609 - 7461.022: 
0.4901% ( 4) 00:07:40.508 7461.022 - 7511.434: 0.5895% ( 14) 00:07:40.508 7511.434 - 7561.846: 0.7315% ( 20) 00:07:40.508 7561.846 - 7612.258: 0.9588% ( 32) 00:07:40.508 7612.258 - 7662.671: 1.3068% ( 49) 00:07:40.508 7662.671 - 7713.083: 2.0810% ( 109) 00:07:40.508 7713.083 - 7763.495: 3.0043% ( 130) 00:07:40.508 7763.495 - 7813.908: 4.2188% ( 171) 00:07:40.508 7813.908 - 7864.320: 5.6321% ( 199) 00:07:40.508 7864.320 - 7914.732: 7.4858% ( 261) 00:07:40.508 7914.732 - 7965.145: 9.5241% ( 287) 00:07:40.508 7965.145 - 8015.557: 11.8892% ( 333) 00:07:40.508 8015.557 - 8065.969: 14.5526% ( 375) 00:07:40.508 8065.969 - 8116.382: 17.1733% ( 369) 00:07:40.508 8116.382 - 8166.794: 19.9361% ( 389) 00:07:40.508 8166.794 - 8217.206: 23.1037% ( 446) 00:07:40.508 8217.206 - 8267.618: 26.2429% ( 442) 00:07:40.508 8267.618 - 8318.031: 29.6946% ( 486) 00:07:40.508 8318.031 - 8368.443: 33.0185% ( 468) 00:07:40.508 8368.443 - 8418.855: 36.4560% ( 484) 00:07:40.508 8418.855 - 8469.268: 39.7514% ( 464) 00:07:40.508 8469.268 - 8519.680: 43.0682% ( 467) 00:07:40.508 8519.680 - 8570.092: 45.7741% ( 381) 00:07:40.508 8570.092 - 8620.505: 48.7216% ( 415) 00:07:40.508 8620.505 - 8670.917: 51.3281% ( 367) 00:07:40.508 8670.917 - 8721.329: 54.0412% ( 382) 00:07:40.508 8721.329 - 8771.742: 56.7685% ( 384) 00:07:40.508 8771.742 - 8822.154: 59.3750% ( 367) 00:07:40.508 8822.154 - 8872.566: 61.5838% ( 311) 00:07:40.508 8872.566 - 8922.978: 63.6222% ( 287) 00:07:40.508 8922.978 - 8973.391: 65.6108% ( 280) 00:07:40.508 8973.391 - 9023.803: 67.6491% ( 287) 00:07:40.508 9023.803 - 9074.215: 69.7159% ( 291) 00:07:40.508 9074.215 - 9124.628: 71.6335% ( 270) 00:07:40.508 9124.628 - 9175.040: 73.4659% ( 258) 00:07:40.508 9175.040 - 9225.452: 75.0000% ( 216) 00:07:40.508 9225.452 - 9275.865: 76.6761% ( 236) 00:07:40.508 9275.865 - 9326.277: 78.4588% ( 251) 00:07:40.508 9326.277 - 9376.689: 79.9574% ( 211) 00:07:40.508 9376.689 - 9427.102: 81.3281% ( 193) 00:07:40.508 9427.102 - 9477.514: 82.3793% ( 148) 00:07:40.508 9477.514 - 9527.926: 83.0611% ( 96) 00:07:40.508 9527.926 - 9578.338: 83.7145% ( 92) 00:07:40.508 9578.338 - 9628.751: 84.2685% ( 78) 00:07:40.508 9628.751 - 9679.163: 84.6946% ( 60) 00:07:40.508 9679.163 - 9729.575: 84.9787% ( 40) 00:07:40.508 9729.575 - 9779.988: 85.2628% ( 40) 00:07:40.508 9779.988 - 9830.400: 85.5682% ( 43) 00:07:40.508 9830.400 - 9880.812: 85.8452% ( 39) 00:07:40.508 9880.812 - 9931.225: 86.1435% ( 42) 00:07:40.508 9931.225 - 9981.637: 86.3849% ( 34) 00:07:40.508 9981.637 - 10032.049: 86.7045% ( 45) 00:07:40.508 10032.049 - 10082.462: 87.0312% ( 46) 00:07:40.508 10082.462 - 10132.874: 87.4645% ( 61) 00:07:40.508 10132.874 - 10183.286: 88.0540% ( 83) 00:07:40.508 10183.286 - 10233.698: 88.5298% ( 67) 00:07:40.508 10233.698 - 10284.111: 89.0625% ( 75) 00:07:40.508 10284.111 - 10334.523: 89.3750% ( 44) 00:07:40.508 10334.523 - 10384.935: 89.6165% ( 34) 00:07:40.508 10384.935 - 10435.348: 89.9929% ( 53) 00:07:40.508 10435.348 - 10485.760: 90.3409% ( 49) 00:07:40.508 10485.760 - 10536.172: 90.5824% ( 34) 00:07:40.508 10536.172 - 10586.585: 90.7812% ( 28) 00:07:40.508 10586.585 - 10636.997: 90.9659% ( 26) 00:07:40.508 10636.997 - 10687.409: 91.1364% ( 24) 00:07:40.508 10687.409 - 10737.822: 91.2855% ( 21) 00:07:40.508 10737.822 - 10788.234: 91.4489% ( 23) 00:07:40.508 10788.234 - 10838.646: 91.5696% ( 17) 00:07:40.508 10838.646 - 10889.058: 91.6761% ( 15) 00:07:40.508 10889.058 - 10939.471: 91.7898% ( 16) 00:07:40.508 10939.471 - 10989.883: 91.8963% ( 15) 00:07:40.508 10989.883 - 11040.295: 92.1165% ( 
31) 00:07:40.508 11040.295 - 11090.708: 92.3722% ( 36) 00:07:40.508 11090.708 - 11141.120: 92.5852% ( 30) 00:07:40.508 11141.120 - 11191.532: 92.7841% ( 28) 00:07:40.508 11191.532 - 11241.945: 93.0611% ( 39) 00:07:40.508 11241.945 - 11292.357: 93.2599% ( 28) 00:07:40.508 11292.357 - 11342.769: 93.4233% ( 23) 00:07:40.508 11342.769 - 11393.182: 93.5724% ( 21) 00:07:40.508 11393.182 - 11443.594: 93.6932% ( 17) 00:07:40.508 11443.594 - 11494.006: 93.8139% ( 17) 00:07:40.508 11494.006 - 11544.418: 93.9347% ( 17) 00:07:40.508 11544.418 - 11594.831: 94.0412% ( 15) 00:07:40.508 11594.831 - 11645.243: 94.1264% ( 12) 00:07:40.508 11645.243 - 11695.655: 94.2259% ( 14) 00:07:40.508 11695.655 - 11746.068: 94.3182% ( 13) 00:07:40.508 11746.068 - 11796.480: 94.4176% ( 14) 00:07:40.508 11796.480 - 11846.892: 94.5455% ( 18) 00:07:40.508 11846.892 - 11897.305: 94.6804% ( 19) 00:07:40.508 11897.305 - 11947.717: 94.7727% ( 13) 00:07:40.508 11947.717 - 11998.129: 94.8580% ( 12) 00:07:40.508 11998.129 - 12048.542: 94.9503% ( 13) 00:07:40.508 12048.542 - 12098.954: 95.0426% ( 13) 00:07:40.508 12098.954 - 12149.366: 95.1491% ( 15) 00:07:40.508 12149.366 - 12199.778: 95.2131% ( 9) 00:07:40.508 12199.778 - 12250.191: 95.3196% ( 15) 00:07:40.508 12250.191 - 12300.603: 95.3764% ( 8) 00:07:40.508 12300.603 - 12351.015: 95.4688% ( 13) 00:07:40.508 12351.015 - 12401.428: 95.5469% ( 11) 00:07:40.508 12401.428 - 12451.840: 95.6818% ( 19) 00:07:40.508 12451.840 - 12502.252: 95.7599% ( 11) 00:07:40.508 12502.252 - 12552.665: 95.8594% ( 14) 00:07:40.508 12552.665 - 12603.077: 95.9730% ( 16) 00:07:40.508 12603.077 - 12653.489: 96.0369% ( 9) 00:07:40.508 12653.489 - 12703.902: 96.1009% ( 9) 00:07:40.508 12703.902 - 12754.314: 96.2003% ( 14) 00:07:40.509 12754.314 - 12804.726: 96.3068% ( 15) 00:07:40.509 12804.726 - 12855.138: 96.4276% ( 17) 00:07:40.509 12855.138 - 12905.551: 96.5980% ( 24) 00:07:40.509 12905.551 - 13006.375: 96.8466% ( 35) 00:07:40.509 13006.375 - 13107.200: 97.0384% ( 27) 00:07:40.509 13107.200 - 13208.025: 97.2159% ( 25) 00:07:40.509 13208.025 - 13308.849: 97.4006% ( 26) 00:07:40.509 13308.849 - 13409.674: 97.5639% ( 23) 00:07:40.509 13409.674 - 13510.498: 97.7912% ( 32) 00:07:40.509 13510.498 - 13611.323: 97.9261% ( 19) 00:07:40.509 13611.323 - 13712.148: 98.0114% ( 12) 00:07:40.509 13712.148 - 13812.972: 98.1179% ( 15) 00:07:40.509 13812.972 - 13913.797: 98.2599% ( 20) 00:07:40.509 13913.797 - 14014.622: 98.4020% ( 20) 00:07:40.509 14014.622 - 14115.446: 98.5014% ( 14) 00:07:40.509 14115.446 - 14216.271: 98.5724% ( 10) 00:07:40.509 14216.271 - 14317.095: 98.6151% ( 6) 00:07:40.509 14317.095 - 14417.920: 98.6364% ( 3) 00:07:40.509 15022.868 - 15123.692: 98.6435% ( 1) 00:07:40.509 15123.692 - 15224.517: 98.6932% ( 7) 00:07:40.509 15224.517 - 15325.342: 98.7571% ( 9) 00:07:40.509 15325.342 - 15426.166: 98.8778% ( 17) 00:07:40.509 15426.166 - 15526.991: 98.9205% ( 6) 00:07:40.509 15526.991 - 15627.815: 98.9489% ( 4) 00:07:40.509 15627.815 - 15728.640: 98.9773% ( 4) 00:07:40.509 15728.640 - 15829.465: 99.0057% ( 4) 00:07:40.509 15829.465 - 15930.289: 99.0341% ( 4) 00:07:40.509 15930.289 - 16031.114: 99.0625% ( 4) 00:07:40.509 16031.114 - 16131.938: 99.0838% ( 3) 00:07:40.509 16131.938 - 16232.763: 99.0909% ( 1) 00:07:40.509 21374.818 - 21475.643: 99.1122% ( 3) 00:07:40.509 21475.643 - 21576.468: 99.1548% ( 6) 00:07:40.509 21576.468 - 21677.292: 99.1974% ( 6) 00:07:40.509 21677.292 - 21778.117: 99.2330% ( 5) 00:07:40.509 21778.117 - 21878.942: 99.2827% ( 7) 00:07:40.509 21878.942 - 21979.766: 99.3111% ( 4) 
00:07:40.509 21979.766 - 22080.591: 99.3537% ( 6) 00:07:40.509 22080.591 - 22181.415: 99.3821% ( 4) 00:07:40.509 22181.415 - 22282.240: 99.4105% ( 4) 00:07:40.509 22282.240 - 22383.065: 99.4318% ( 3) 00:07:40.509 22383.065 - 22483.889: 99.4531% ( 3) 00:07:40.509 22483.889 - 22584.714: 99.4744% ( 3) 00:07:40.509 22584.714 - 22685.538: 99.4886% ( 2) 00:07:40.509 22685.538 - 22786.363: 99.5099% ( 3) 00:07:40.509 22786.363 - 22887.188: 99.5312% ( 3) 00:07:40.509 22887.188 - 22988.012: 99.5455% ( 2) 00:07:40.509 28230.892 - 28432.542: 99.5739% ( 4) 00:07:40.509 28432.542 - 28634.191: 99.6023% ( 4) 00:07:40.509 28835.840 - 29037.489: 99.6094% ( 1) 00:07:40.509 29037.489 - 29239.138: 99.6591% ( 7) 00:07:40.509 29239.138 - 29440.788: 99.7159% ( 8) 00:07:40.509 29440.788 - 29642.437: 99.7585% ( 6) 00:07:40.509 29642.437 - 29844.086: 99.8153% ( 8) 00:07:40.509 29844.086 - 30045.735: 99.8580% ( 6) 00:07:40.509 30045.735 - 30247.385: 99.9148% ( 8) 00:07:40.509 30247.385 - 30449.034: 99.9645% ( 7) 00:07:40.509 30449.034 - 30650.683: 100.0000% ( 5) 00:07:40.509 00:07:40.509 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.509 ============================================================================== 00:07:40.509 Range in us Cumulative IO count 00:07:40.509 5016.025 - 5041.231: 0.0071% ( 1) 00:07:40.509 5041.231 - 5066.437: 0.0213% ( 2) 00:07:40.509 5066.437 - 5091.643: 0.0355% ( 2) 00:07:40.509 5091.643 - 5116.849: 0.0639% ( 4) 00:07:40.509 5116.849 - 5142.055: 0.0852% ( 3) 00:07:40.509 5142.055 - 5167.262: 0.1136% ( 4) 00:07:40.509 5167.262 - 5192.468: 0.1562% ( 6) 00:07:40.509 5192.468 - 5217.674: 0.1847% ( 4) 00:07:40.509 5217.674 - 5242.880: 0.2131% ( 4) 00:07:40.509 5242.880 - 5268.086: 0.2486% ( 5) 00:07:40.509 5268.086 - 5293.292: 0.2841% ( 5) 00:07:40.509 5293.292 - 5318.498: 0.3196% ( 5) 00:07:40.509 5318.498 - 5343.705: 0.3480% ( 4) 00:07:40.509 5343.705 - 5368.911: 0.3764% ( 4) 00:07:40.509 5368.911 - 5394.117: 0.3977% ( 3) 00:07:40.509 5394.117 - 5419.323: 0.4190% ( 3) 00:07:40.509 5419.323 - 5444.529: 0.4332% ( 2) 00:07:40.509 5444.529 - 5469.735: 0.4474% ( 2) 00:07:40.509 5469.735 - 5494.942: 0.4545% ( 1) 00:07:40.509 7360.197 - 7410.609: 0.4616% ( 1) 00:07:40.509 7410.609 - 7461.022: 0.4972% ( 5) 00:07:40.509 7461.022 - 7511.434: 0.5824% ( 12) 00:07:40.509 7511.434 - 7561.846: 0.6960% ( 16) 00:07:40.509 7561.846 - 7612.258: 0.8594% ( 23) 00:07:40.509 7612.258 - 7662.671: 1.2287% ( 52) 00:07:40.509 7662.671 - 7713.083: 1.8466% ( 87) 00:07:40.509 7713.083 - 7763.495: 2.6491% ( 113) 00:07:40.509 7763.495 - 7813.908: 3.8352% ( 167) 00:07:40.509 7813.908 - 7864.320: 5.3835% ( 218) 00:07:40.509 7864.320 - 7914.732: 7.1236% ( 245) 00:07:40.509 7914.732 - 7965.145: 9.1903% ( 291) 00:07:40.509 7965.145 - 8015.557: 11.6548% ( 347) 00:07:40.509 8015.557 - 8065.969: 14.2969% ( 372) 00:07:40.509 8065.969 - 8116.382: 17.0312% ( 385) 00:07:40.509 8116.382 - 8166.794: 19.7656% ( 385) 00:07:40.509 8166.794 - 8217.206: 23.0966% ( 469) 00:07:40.509 8217.206 - 8267.618: 26.8111% ( 523) 00:07:40.509 8267.618 - 8318.031: 30.1207% ( 466) 00:07:40.509 8318.031 - 8368.443: 33.4588% ( 470) 00:07:40.509 8368.443 - 8418.855: 36.9744% ( 495) 00:07:40.509 8418.855 - 8469.268: 40.6250% ( 514) 00:07:40.509 8469.268 - 8519.680: 43.7855% ( 445) 00:07:40.509 8519.680 - 8570.092: 46.5341% ( 387) 00:07:40.509 8570.092 - 8620.505: 49.1051% ( 362) 00:07:40.509 8620.505 - 8670.917: 51.9247% ( 397) 00:07:40.509 8670.917 - 8721.329: 54.7443% ( 397) 00:07:40.509 8721.329 - 8771.742: 57.4219% ( 377) 
00:07:40.509 8771.742 - 8822.154: 59.9432% ( 355) 00:07:40.509 8822.154 - 8872.566: 62.3864% ( 344) 00:07:40.509 8872.566 - 8922.978: 64.6520% ( 319) 00:07:40.509 8922.978 - 8973.391: 66.5980% ( 274) 00:07:40.509 8973.391 - 9023.803: 68.4659% ( 263) 00:07:40.509 9023.803 - 9074.215: 70.1989% ( 244) 00:07:40.509 9074.215 - 9124.628: 71.7614% ( 220) 00:07:40.509 9124.628 - 9175.040: 73.3523% ( 224) 00:07:40.509 9175.040 - 9225.452: 74.9290% ( 222) 00:07:40.509 9225.452 - 9275.865: 76.3920% ( 206) 00:07:40.509 9275.865 - 9326.277: 77.7628% ( 193) 00:07:40.509 9326.277 - 9376.689: 78.9773% ( 171) 00:07:40.509 9376.689 - 9427.102: 80.2131% ( 174) 00:07:40.509 9427.102 - 9477.514: 81.3494% ( 160) 00:07:40.509 9477.514 - 9527.926: 82.2585% ( 128) 00:07:40.509 9527.926 - 9578.338: 82.9261% ( 94) 00:07:40.509 9578.338 - 9628.751: 83.5653% ( 90) 00:07:40.509 9628.751 - 9679.163: 84.0554% ( 69) 00:07:40.509 9679.163 - 9729.575: 84.5241% ( 66) 00:07:40.509 9729.575 - 9779.988: 84.8864% ( 51) 00:07:40.509 9779.988 - 9830.400: 85.2699% ( 54) 00:07:40.509 9830.400 - 9880.812: 85.7173% ( 63) 00:07:40.509 9880.812 - 9931.225: 86.1932% ( 67) 00:07:40.509 9931.225 - 9981.637: 86.7827% ( 83) 00:07:40.509 9981.637 - 10032.049: 87.2372% ( 64) 00:07:40.509 10032.049 - 10082.462: 87.6847% ( 63) 00:07:40.509 10082.462 - 10132.874: 88.0895% ( 57) 00:07:40.509 10132.874 - 10183.286: 88.4588% ( 52) 00:07:40.509 10183.286 - 10233.698: 88.8210% ( 51) 00:07:40.509 10233.698 - 10284.111: 89.1406% ( 45) 00:07:40.509 10284.111 - 10334.523: 89.3750% ( 33) 00:07:40.509 10334.523 - 10384.935: 89.6449% ( 38) 00:07:40.509 10384.935 - 10435.348: 89.9148% ( 38) 00:07:40.509 10435.348 - 10485.760: 90.1065% ( 27) 00:07:40.509 10485.760 - 10536.172: 90.3551% ( 35) 00:07:40.509 10536.172 - 10586.585: 90.6605% ( 43) 00:07:40.509 10586.585 - 10636.997: 91.0298% ( 52) 00:07:40.509 10636.997 - 10687.409: 91.3920% ( 51) 00:07:40.509 10687.409 - 10737.822: 91.6548% ( 37) 00:07:40.509 10737.822 - 10788.234: 91.8821% ( 32) 00:07:40.509 10788.234 - 10838.646: 92.1023% ( 31) 00:07:40.509 10838.646 - 10889.058: 92.3224% ( 31) 00:07:40.509 10889.058 - 10939.471: 92.4645% ( 20) 00:07:40.509 10939.471 - 10989.883: 92.6207% ( 22) 00:07:40.509 10989.883 - 11040.295: 92.7486% ( 18) 00:07:40.509 11040.295 - 11090.708: 92.8480% ( 14) 00:07:40.509 11090.708 - 11141.120: 92.9545% ( 15) 00:07:40.509 11141.120 - 11191.532: 93.0611% ( 15) 00:07:40.509 11191.532 - 11241.945: 93.2173% ( 22) 00:07:40.509 11241.945 - 11292.357: 93.3168% ( 14) 00:07:40.509 11292.357 - 11342.769: 93.4233% ( 15) 00:07:40.509 11342.769 - 11393.182: 93.5227% ( 14) 00:07:40.509 11393.182 - 11443.594: 93.6364% ( 16) 00:07:40.509 11443.594 - 11494.006: 93.7571% ( 17) 00:07:40.509 11494.006 - 11544.418: 93.8423% ( 12) 00:07:40.509 11544.418 - 11594.831: 93.8991% ( 8) 00:07:40.509 11594.831 - 11645.243: 93.9773% ( 11) 00:07:40.509 11645.243 - 11695.655: 94.0128% ( 5) 00:07:40.509 11695.655 - 11746.068: 94.1051% ( 13) 00:07:40.509 11746.068 - 11796.480: 94.2045% ( 14) 00:07:40.509 11796.480 - 11846.892: 94.3040% ( 14) 00:07:40.509 11846.892 - 11897.305: 94.4318% ( 18) 00:07:40.509 11897.305 - 11947.717: 94.5881% ( 22) 00:07:40.509 11947.717 - 11998.129: 94.7088% ( 17) 00:07:40.509 11998.129 - 12048.542: 94.7940% ( 12) 00:07:40.509 12048.542 - 12098.954: 94.9077% ( 16) 00:07:40.509 12098.954 - 12149.366: 95.0639% ( 22) 00:07:40.509 12149.366 - 12199.778: 95.2628% ( 28) 00:07:40.509 12199.778 - 12250.191: 95.4830% ( 31) 00:07:40.509 12250.191 - 12300.603: 95.6676% ( 26) 00:07:40.509 
12300.603 - 12351.015: 95.8807% ( 30) 00:07:40.509 12351.015 - 12401.428: 96.0724% ( 27) 00:07:40.509 12401.428 - 12451.840: 96.1506% ( 11) 00:07:40.509 12451.840 - 12502.252: 96.2571% ( 15) 00:07:40.509 12502.252 - 12552.665: 96.3210% ( 9) 00:07:40.509 12552.665 - 12603.077: 96.3991% ( 11) 00:07:40.509 12603.077 - 12653.489: 96.4631% ( 9) 00:07:40.509 12653.489 - 12703.902: 96.5483% ( 12) 00:07:40.509 12703.902 - 12754.314: 96.6122% ( 9) 00:07:40.510 12754.314 - 12804.726: 96.6690% ( 8) 00:07:40.510 12804.726 - 12855.138: 96.7259% ( 8) 00:07:40.510 12855.138 - 12905.551: 96.7898% ( 9) 00:07:40.510 12905.551 - 13006.375: 96.9389% ( 21) 00:07:40.510 13006.375 - 13107.200: 97.1946% ( 36) 00:07:40.510 13107.200 - 13208.025: 97.3011% ( 15) 00:07:40.510 13208.025 - 13308.849: 97.3864% ( 12) 00:07:40.510 13308.849 - 13409.674: 97.5000% ( 16) 00:07:40.510 13409.674 - 13510.498: 97.6278% ( 18) 00:07:40.510 13510.498 - 13611.323: 97.7486% ( 17) 00:07:40.510 13611.323 - 13712.148: 97.8338% ( 12) 00:07:40.510 13712.148 - 13812.972: 97.9119% ( 11) 00:07:40.510 13812.972 - 13913.797: 97.9901% ( 11) 00:07:40.510 13913.797 - 14014.622: 98.0611% ( 10) 00:07:40.510 14014.622 - 14115.446: 98.1392% ( 11) 00:07:40.510 14115.446 - 14216.271: 98.2173% ( 11) 00:07:40.510 14216.271 - 14317.095: 98.2741% ( 8) 00:07:40.510 14317.095 - 14417.920: 98.3949% ( 17) 00:07:40.510 14417.920 - 14518.745: 98.5085% ( 16) 00:07:40.510 14518.745 - 14619.569: 98.6648% ( 22) 00:07:40.510 14619.569 - 14720.394: 98.7784% ( 16) 00:07:40.510 14720.394 - 14821.218: 98.8636% ( 12) 00:07:40.510 14821.218 - 14922.043: 98.9205% ( 8) 00:07:40.510 14922.043 - 15022.868: 98.9631% ( 6) 00:07:40.510 15022.868 - 15123.692: 98.9915% ( 4) 00:07:40.510 15123.692 - 15224.517: 99.0199% ( 4) 00:07:40.510 15224.517 - 15325.342: 99.0412% ( 3) 00:07:40.510 15325.342 - 15426.166: 99.0696% ( 4) 00:07:40.510 15426.166 - 15526.991: 99.0909% ( 3) 00:07:40.510 21273.994 - 21374.818: 99.1122% ( 3) 00:07:40.510 21374.818 - 21475.643: 99.1406% ( 4) 00:07:40.510 21475.643 - 21576.468: 99.1832% ( 6) 00:07:40.510 21576.468 - 21677.292: 99.1974% ( 2) 00:07:40.510 21677.292 - 21778.117: 99.2330% ( 5) 00:07:40.510 21778.117 - 21878.942: 99.2614% ( 4) 00:07:40.510 21878.942 - 21979.766: 99.2969% ( 5) 00:07:40.510 21979.766 - 22080.591: 99.3182% ( 3) 00:07:40.510 22080.591 - 22181.415: 99.3466% ( 4) 00:07:40.510 22181.415 - 22282.240: 99.3750% ( 4) 00:07:40.510 22282.240 - 22383.065: 99.3963% ( 3) 00:07:40.510 22383.065 - 22483.889: 99.4176% ( 3) 00:07:40.510 22483.889 - 22584.714: 99.4389% ( 3) 00:07:40.510 22584.714 - 22685.538: 99.4602% ( 3) 00:07:40.510 22685.538 - 22786.363: 99.4886% ( 4) 00:07:40.510 22786.363 - 22887.188: 99.5099% ( 3) 00:07:40.510 22887.188 - 22988.012: 99.5312% ( 3) 00:07:40.510 22988.012 - 23088.837: 99.5455% ( 2) 00:07:40.510 28230.892 - 28432.542: 99.5668% ( 3) 00:07:40.510 28432.542 - 28634.191: 99.6520% ( 12) 00:07:40.510 28634.191 - 28835.840: 99.7230% ( 10) 00:07:40.510 28835.840 - 29037.489: 99.7727% ( 7) 00:07:40.510 29037.489 - 29239.138: 99.8366% ( 9) 00:07:40.510 29239.138 - 29440.788: 99.8793% ( 6) 00:07:40.510 29440.788 - 29642.437: 99.9503% ( 10) 00:07:40.510 29642.437 - 29844.086: 99.9858% ( 5) 00:07:40.510 29844.086 - 30045.735: 100.0000% ( 2) 00:07:40.510 00:07:40.510 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.510 ============================================================================== 00:07:40.510 Range in us Cumulative IO count 00:07:40.510 4335.458 - 4360.665: 0.0071% ( 1) 00:07:40.510 
4537.108 - 4562.314: 0.0213% ( 2) 00:07:40.510 4562.314 - 4587.520: 0.0497% ( 4) 00:07:40.510 4587.520 - 4612.726: 0.0781% ( 4) 00:07:40.510 4612.726 - 4637.932: 0.1207% ( 6) 00:07:40.510 4637.932 - 4663.138: 0.1776% ( 8) 00:07:40.510 4663.138 - 4688.345: 0.2131% ( 5) 00:07:40.510 4688.345 - 4713.551: 0.2557% ( 6) 00:07:40.510 4713.551 - 4738.757: 0.2983% ( 6) 00:07:40.510 4738.757 - 4763.963: 0.3267% ( 4) 00:07:40.510 4763.963 - 4789.169: 0.3409% ( 2) 00:07:40.510 4789.169 - 4814.375: 0.3551% ( 2) 00:07:40.510 4814.375 - 4839.582: 0.3764% ( 3) 00:07:40.510 4839.582 - 4864.788: 0.3977% ( 3) 00:07:40.510 4864.788 - 4889.994: 0.4119% ( 2) 00:07:40.510 4889.994 - 4915.200: 0.4261% ( 2) 00:07:40.510 4915.200 - 4940.406: 0.4403% ( 2) 00:07:40.510 4940.406 - 4965.612: 0.4545% ( 2) 00:07:40.510 7259.372 - 7309.785: 0.4616% ( 1) 00:07:40.510 7360.197 - 7410.609: 0.4688% ( 1) 00:07:40.510 7410.609 - 7461.022: 0.4759% ( 1) 00:07:40.510 7461.022 - 7511.434: 0.5114% ( 5) 00:07:40.510 7511.434 - 7561.846: 0.5966% ( 12) 00:07:40.510 7561.846 - 7612.258: 0.7315% ( 19) 00:07:40.510 7612.258 - 7662.671: 0.9872% ( 36) 00:07:40.510 7662.671 - 7713.083: 1.5696% ( 82) 00:07:40.510 7713.083 - 7763.495: 2.4006% ( 117) 00:07:40.510 7763.495 - 7813.908: 3.4659% ( 150) 00:07:40.510 7813.908 - 7864.320: 5.0142% ( 218) 00:07:40.510 7864.320 - 7914.732: 6.9744% ( 276) 00:07:40.510 7914.732 - 7965.145: 9.0696% ( 295) 00:07:40.510 7965.145 - 8015.557: 12.1449% ( 433) 00:07:40.510 8015.557 - 8065.969: 14.8651% ( 383) 00:07:40.510 8065.969 - 8116.382: 17.4148% ( 359) 00:07:40.510 8116.382 - 8166.794: 20.2983% ( 406) 00:07:40.510 8166.794 - 8217.206: 23.3594% ( 431) 00:07:40.510 8217.206 - 8267.618: 26.3281% ( 418) 00:07:40.510 8267.618 - 8318.031: 29.6449% ( 467) 00:07:40.510 8318.031 - 8368.443: 33.1179% ( 489) 00:07:40.510 8368.443 - 8418.855: 37.1023% ( 561) 00:07:40.510 8418.855 - 8469.268: 40.5753% ( 489) 00:07:40.510 8469.268 - 8519.680: 43.6364% ( 431) 00:07:40.510 8519.680 - 8570.092: 46.5696% ( 413) 00:07:40.510 8570.092 - 8620.505: 49.5526% ( 420) 00:07:40.510 8620.505 - 8670.917: 52.6278% ( 433) 00:07:40.510 8670.917 - 8721.329: 55.3906% ( 389) 00:07:40.510 8721.329 - 8771.742: 58.0327% ( 372) 00:07:40.510 8771.742 - 8822.154: 60.2060% ( 306) 00:07:40.510 8822.154 - 8872.566: 62.1094% ( 268) 00:07:40.510 8872.566 - 8922.978: 64.0412% ( 272) 00:07:40.510 8922.978 - 8973.391: 65.9801% ( 273) 00:07:40.510 8973.391 - 9023.803: 67.7344% ( 247) 00:07:40.510 9023.803 - 9074.215: 69.6875% ( 275) 00:07:40.510 9074.215 - 9124.628: 71.4915% ( 254) 00:07:40.510 9124.628 - 9175.040: 73.2173% ( 243) 00:07:40.510 9175.040 - 9225.452: 74.7585% ( 217) 00:07:40.510 9225.452 - 9275.865: 76.2855% ( 215) 00:07:40.510 9275.865 - 9326.277: 77.8054% ( 214) 00:07:40.510 9326.277 - 9376.689: 79.1619% ( 191) 00:07:40.510 9376.689 - 9427.102: 80.4332% ( 179) 00:07:40.510 9427.102 - 9477.514: 81.4631% ( 145) 00:07:40.510 9477.514 - 9527.926: 82.3864% ( 130) 00:07:40.510 9527.926 - 9578.338: 83.1108% ( 102) 00:07:40.510 9578.338 - 9628.751: 83.6506% ( 76) 00:07:40.510 9628.751 - 9679.163: 84.2756% ( 88) 00:07:40.510 9679.163 - 9729.575: 84.6946% ( 59) 00:07:40.510 9729.575 - 9779.988: 85.0994% ( 57) 00:07:40.510 9779.988 - 9830.400: 85.5469% ( 63) 00:07:40.510 9830.400 - 9880.812: 86.0511% ( 71) 00:07:40.510 9880.812 - 9931.225: 86.3849% ( 47) 00:07:40.510 9931.225 - 9981.637: 86.8111% ( 60) 00:07:40.510 9981.637 - 10032.049: 87.1946% ( 54) 00:07:40.510 10032.049 - 10082.462: 87.5426% ( 49) 00:07:40.510 10082.462 - 10132.874: 87.9048% ( 51) 
00:07:40.510 10132.874 - 10183.286: 88.2884% ( 54) 00:07:40.510 10183.286 - 10233.698: 88.6435% ( 50) 00:07:40.510 10233.698 - 10284.111: 88.9418% ( 42) 00:07:40.510 10284.111 - 10334.523: 89.1974% ( 36) 00:07:40.510 10334.523 - 10384.935: 89.5384% ( 48) 00:07:40.510 10384.935 - 10435.348: 89.9645% ( 60) 00:07:40.510 10435.348 - 10485.760: 90.2415% ( 39) 00:07:40.510 10485.760 - 10536.172: 90.4972% ( 36) 00:07:40.510 10536.172 - 10586.585: 90.7457% ( 35) 00:07:40.510 10586.585 - 10636.997: 91.1009% ( 50) 00:07:40.510 10636.997 - 10687.409: 91.3494% ( 35) 00:07:40.510 10687.409 - 10737.822: 91.6619% ( 44) 00:07:40.510 10737.822 - 10788.234: 91.9957% ( 47) 00:07:40.510 10788.234 - 10838.646: 92.2372% ( 34) 00:07:40.510 10838.646 - 10889.058: 92.4503% ( 30) 00:07:40.510 10889.058 - 10939.471: 92.6065% ( 22) 00:07:40.510 10939.471 - 10989.883: 92.7273% ( 17) 00:07:40.510 10989.883 - 11040.295: 92.8480% ( 17) 00:07:40.510 11040.295 - 11090.708: 92.9759% ( 18) 00:07:40.510 11090.708 - 11141.120: 93.1605% ( 26) 00:07:40.510 11141.120 - 11191.532: 93.3168% ( 22) 00:07:40.510 11191.532 - 11241.945: 93.4446% ( 18) 00:07:40.511 11241.945 - 11292.357: 93.5582% ( 16) 00:07:40.511 11292.357 - 11342.769: 93.6577% ( 14) 00:07:40.511 11342.769 - 11393.182: 93.7571% ( 14) 00:07:40.511 11393.182 - 11443.594: 93.7926% ( 5) 00:07:40.511 11443.594 - 11494.006: 93.8494% ( 8) 00:07:40.511 11494.006 - 11544.418: 93.8920% ( 6) 00:07:40.511 11544.418 - 11594.831: 93.9276% ( 5) 00:07:40.511 11594.831 - 11645.243: 93.9844% ( 8) 00:07:40.511 11645.243 - 11695.655: 94.1193% ( 19) 00:07:40.511 11695.655 - 11746.068: 94.2330% ( 16) 00:07:40.511 11746.068 - 11796.480: 94.3324% ( 14) 00:07:40.511 11796.480 - 11846.892: 94.5668% ( 33) 00:07:40.511 11846.892 - 11897.305: 94.6946% ( 18) 00:07:40.511 11897.305 - 11947.717: 94.8011% ( 15) 00:07:40.511 11947.717 - 11998.129: 94.9361% ( 19) 00:07:40.511 11998.129 - 12048.542: 95.0426% ( 15) 00:07:40.511 12048.542 - 12098.954: 95.1207% ( 11) 00:07:40.511 12098.954 - 12149.366: 95.1989% ( 11) 00:07:40.511 12149.366 - 12199.778: 95.2841% ( 12) 00:07:40.511 12199.778 - 12250.191: 95.3835% ( 14) 00:07:40.511 12250.191 - 12300.603: 95.5043% ( 17) 00:07:40.511 12300.603 - 12351.015: 95.6108% ( 15) 00:07:40.511 12351.015 - 12401.428: 95.7599% ( 21) 00:07:40.511 12401.428 - 12451.840: 95.9091% ( 21) 00:07:40.511 12451.840 - 12502.252: 96.1222% ( 30) 00:07:40.511 12502.252 - 12552.665: 96.4844% ( 51) 00:07:40.511 12552.665 - 12603.077: 96.6264% ( 20) 00:07:40.511 12603.077 - 12653.489: 96.7330% ( 15) 00:07:40.511 12653.489 - 12703.902: 96.8182% ( 12) 00:07:40.511 12703.902 - 12754.314: 96.9105% ( 13) 00:07:40.511 12754.314 - 12804.726: 97.0099% ( 14) 00:07:40.511 12804.726 - 12855.138: 97.1378% ( 18) 00:07:40.511 12855.138 - 12905.551: 97.2017% ( 9) 00:07:40.511 12905.551 - 13006.375: 97.3082% ( 15) 00:07:40.511 13006.375 - 13107.200: 97.4077% ( 14) 00:07:40.511 13107.200 - 13208.025: 97.4929% ( 12) 00:07:40.511 13208.025 - 13308.849: 97.5639% ( 10) 00:07:40.511 13308.849 - 13409.674: 97.6420% ( 11) 00:07:40.511 13409.674 - 13510.498: 97.7202% ( 11) 00:07:40.511 13510.498 - 13611.323: 97.7841% ( 9) 00:07:40.511 13611.323 - 13712.148: 97.9048% ( 17) 00:07:40.511 13712.148 - 13812.972: 97.9830% ( 11) 00:07:40.511 13812.972 - 13913.797: 98.0895% ( 15) 00:07:40.511 13913.797 - 14014.622: 98.2812% ( 27) 00:07:40.511 14014.622 - 14115.446: 98.4304% ( 21) 00:07:40.511 14115.446 - 14216.271: 98.4943% ( 9) 00:07:40.511 14216.271 - 14317.095: 98.5369% ( 6) 00:07:40.511 14317.095 - 14417.920: 98.5653% 
( 4) 00:07:40.511 14417.920 - 14518.745: 98.6151% ( 7) 00:07:40.511 14518.745 - 14619.569: 98.6719% ( 8) 00:07:40.511 14619.569 - 14720.394: 98.7216% ( 7) 00:07:40.511 14720.394 - 14821.218: 98.7713% ( 7) 00:07:40.511 14821.218 - 14922.043: 98.7926% ( 3) 00:07:40.511 14922.043 - 15022.868: 98.8352% ( 6) 00:07:40.511 15022.868 - 15123.692: 98.8494% ( 2) 00:07:40.511 15123.692 - 15224.517: 98.8778% ( 4) 00:07:40.511 15224.517 - 15325.342: 98.8991% ( 3) 00:07:40.511 15325.342 - 15426.166: 98.9489% ( 7) 00:07:40.511 15426.166 - 15526.991: 99.0270% ( 11) 00:07:40.511 15526.991 - 15627.815: 99.0625% ( 5) 00:07:40.511 15627.815 - 15728.640: 99.0838% ( 3) 00:07:40.511 15728.640 - 15829.465: 99.0909% ( 1) 00:07:40.511 20971.520 - 21072.345: 99.1051% ( 2) 00:07:40.511 21072.345 - 21173.169: 99.1335% ( 4) 00:07:40.511 21173.169 - 21273.994: 99.1619% ( 4) 00:07:40.511 21273.994 - 21374.818: 99.1974% ( 5) 00:07:40.511 21374.818 - 21475.643: 99.2188% ( 3) 00:07:40.511 21475.643 - 21576.468: 99.2472% ( 4) 00:07:40.511 21576.468 - 21677.292: 99.2685% ( 3) 00:07:40.511 21677.292 - 21778.117: 99.3040% ( 5) 00:07:40.511 21778.117 - 21878.942: 99.3324% ( 4) 00:07:40.511 21878.942 - 21979.766: 99.3608% ( 4) 00:07:40.511 21979.766 - 22080.591: 99.3892% ( 4) 00:07:40.511 22080.591 - 22181.415: 99.4176% ( 4) 00:07:40.511 22181.415 - 22282.240: 99.4389% ( 3) 00:07:40.511 22282.240 - 22383.065: 99.4673% ( 4) 00:07:40.511 22383.065 - 22483.889: 99.4957% ( 4) 00:07:40.511 22483.889 - 22584.714: 99.5170% ( 3) 00:07:40.511 22584.714 - 22685.538: 99.5384% ( 3) 00:07:40.511 22685.538 - 22786.363: 99.5455% ( 1) 00:07:40.511 28029.243 - 28230.892: 99.5526% ( 1) 00:07:40.511 28230.892 - 28432.542: 99.5810% ( 4) 00:07:40.511 28432.542 - 28634.191: 99.6946% ( 16) 00:07:40.511 28634.191 - 28835.840: 99.7798% ( 12) 00:07:40.511 28835.840 - 29037.489: 99.8722% ( 13) 00:07:40.511 29037.489 - 29239.138: 99.9290% ( 8) 00:07:40.511 29239.138 - 29440.788: 99.9787% ( 7) 00:07:40.511 29440.788 - 29642.437: 100.0000% ( 3) 00:07:40.511 00:07:40.511 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.511 ============================================================================== 00:07:40.511 Range in us Cumulative IO count 00:07:40.511 4411.077 - 4436.283: 0.0071% ( 1) 00:07:40.511 4436.283 - 4461.489: 0.0284% ( 3) 00:07:40.511 4461.489 - 4486.695: 0.0355% ( 1) 00:07:40.511 4486.695 - 4511.902: 0.1065% ( 10) 00:07:40.511 4511.902 - 4537.108: 0.2202% ( 16) 00:07:40.511 4537.108 - 4562.314: 0.3054% ( 12) 00:07:40.511 4562.314 - 4587.520: 0.3409% ( 5) 00:07:40.511 4587.520 - 4612.726: 0.3551% ( 2) 00:07:40.511 4612.726 - 4637.932: 0.3693% ( 2) 00:07:40.511 4637.932 - 4663.138: 0.3835% ( 2) 00:07:40.511 4663.138 - 4688.345: 0.3977% ( 2) 00:07:40.511 4688.345 - 4713.551: 0.4119% ( 2) 00:07:40.511 4713.551 - 4738.757: 0.4261% ( 2) 00:07:40.511 4738.757 - 4763.963: 0.4332% ( 1) 00:07:40.511 4763.963 - 4789.169: 0.4474% ( 2) 00:07:40.511 4789.169 - 4814.375: 0.4545% ( 1) 00:07:40.511 7309.785 - 7360.197: 0.4688% ( 2) 00:07:40.511 7360.197 - 7410.609: 0.5469% ( 11) 00:07:40.511 7410.609 - 7461.022: 0.6321% ( 12) 00:07:40.511 7461.022 - 7511.434: 0.8097% ( 25) 00:07:40.511 7511.434 - 7561.846: 0.9517% ( 20) 00:07:40.511 7561.846 - 7612.258: 1.0653% ( 16) 00:07:40.511 7612.258 - 7662.671: 1.4062% ( 48) 00:07:40.511 7662.671 - 7713.083: 1.9176% ( 72) 00:07:40.511 7713.083 - 7763.495: 2.7841% ( 122) 00:07:40.511 7763.495 - 7813.908: 3.9347% ( 162) 00:07:40.511 7813.908 - 7864.320: 5.2202% ( 181) 00:07:40.511 7864.320 - 7914.732: 
6.9176% ( 239) 00:07:40.511 7914.732 - 7965.145: 8.8849% ( 277) 00:07:40.511 7965.145 - 8015.557: 11.4702% ( 364) 00:07:40.511 8015.557 - 8065.969: 13.8210% ( 331) 00:07:40.511 8065.969 - 8116.382: 16.4062% ( 364) 00:07:40.511 8116.382 - 8166.794: 19.5312% ( 440) 00:07:40.511 8166.794 - 8217.206: 22.7415% ( 452) 00:07:40.511 8217.206 - 8267.618: 26.2216% ( 490) 00:07:40.511 8267.618 - 8318.031: 29.4744% ( 458) 00:07:40.511 8318.031 - 8368.443: 33.4872% ( 565) 00:07:40.511 8368.443 - 8418.855: 37.0312% ( 499) 00:07:40.511 8418.855 - 8469.268: 40.8097% ( 532) 00:07:40.511 8469.268 - 8519.680: 43.9489% ( 442) 00:07:40.511 8519.680 - 8570.092: 47.0312% ( 434) 00:07:40.511 8570.092 - 8620.505: 50.1065% ( 433) 00:07:40.511 8620.505 - 8670.917: 52.5497% ( 344) 00:07:40.511 8670.917 - 8721.329: 55.2628% ( 382) 00:07:40.511 8721.329 - 8771.742: 57.5000% ( 315) 00:07:40.511 8771.742 - 8822.154: 59.4105% ( 269) 00:07:40.511 8822.154 - 8872.566: 61.7756% ( 333) 00:07:40.511 8872.566 - 8922.978: 64.0483% ( 320) 00:07:40.511 8922.978 - 8973.391: 66.2855% ( 315) 00:07:40.511 8973.391 - 9023.803: 68.3807% ( 295) 00:07:40.511 9023.803 - 9074.215: 70.4545% ( 292) 00:07:40.511 9074.215 - 9124.628: 72.3793% ( 271) 00:07:40.511 9124.628 - 9175.040: 74.2401% ( 262) 00:07:40.511 9175.040 - 9225.452: 75.8878% ( 232) 00:07:40.511 9225.452 - 9275.865: 77.4077% ( 214) 00:07:40.511 9275.865 - 9326.277: 78.8423% ( 202) 00:07:40.511 9326.277 - 9376.689: 80.1776% ( 188) 00:07:40.511 9376.689 - 9427.102: 81.1861% ( 142) 00:07:40.511 9427.102 - 9477.514: 82.0881% ( 127) 00:07:40.511 9477.514 - 9527.926: 82.8622% ( 109) 00:07:40.511 9527.926 - 9578.338: 83.6861% ( 116) 00:07:40.511 9578.338 - 9628.751: 84.3892% ( 99) 00:07:40.511 9628.751 - 9679.163: 85.0781% ( 97) 00:07:40.511 9679.163 - 9729.575: 85.7102% ( 89) 00:07:40.511 9729.575 - 9779.988: 86.1506% ( 62) 00:07:40.511 9779.988 - 9830.400: 86.5625% ( 58) 00:07:40.511 9830.400 - 9880.812: 87.0028% ( 62) 00:07:40.511 9880.812 - 9931.225: 87.4219% ( 59) 00:07:40.511 9931.225 - 9981.637: 87.7628% ( 48) 00:07:40.511 9981.637 - 10032.049: 88.1605% ( 56) 00:07:40.511 10032.049 - 10082.462: 88.4517% ( 41) 00:07:40.511 10082.462 - 10132.874: 88.7287% ( 39) 00:07:40.511 10132.874 - 10183.286: 88.9844% ( 36) 00:07:40.511 10183.286 - 10233.698: 89.2116% ( 32) 00:07:40.511 10233.698 - 10284.111: 89.4389% ( 32) 00:07:40.511 10284.111 - 10334.523: 89.6804% ( 34) 00:07:40.511 10334.523 - 10384.935: 89.9361% ( 36) 00:07:40.511 10384.935 - 10435.348: 90.2060% ( 38) 00:07:40.511 10435.348 - 10485.760: 90.3977% ( 27) 00:07:40.511 10485.760 - 10536.172: 90.5753% ( 25) 00:07:40.511 10536.172 - 10586.585: 90.8239% ( 35) 00:07:40.511 10586.585 - 10636.997: 91.0795% ( 36) 00:07:40.511 10636.997 - 10687.409: 91.2429% ( 23) 00:07:40.511 10687.409 - 10737.822: 91.3991% ( 22) 00:07:40.511 10737.822 - 10788.234: 91.5767% ( 25) 00:07:40.511 10788.234 - 10838.646: 91.6903% ( 16) 00:07:40.511 10838.646 - 10889.058: 91.8395% ( 21) 00:07:40.511 10889.058 - 10939.471: 91.9957% ( 22) 00:07:40.511 10939.471 - 10989.883: 92.1236% ( 18) 00:07:40.511 10989.883 - 11040.295: 92.2372% ( 16) 00:07:40.511 11040.295 - 11090.708: 92.3864% ( 21) 00:07:40.511 11090.708 - 11141.120: 92.5142% ( 18) 00:07:40.511 11141.120 - 11191.532: 92.7131% ( 28) 00:07:40.511 11191.532 - 11241.945: 92.8409% ( 18) 00:07:40.511 11241.945 - 11292.357: 92.9688% ( 18) 00:07:40.512 11292.357 - 11342.769: 93.0824% ( 16) 00:07:40.512 11342.769 - 11393.182: 93.4233% ( 48) 00:07:40.512 11393.182 - 11443.594: 93.5653% ( 20) 00:07:40.512 
11443.594 - 11494.006: 93.7358% ( 24) 00:07:40.512 11494.006 - 11544.418: 93.8565% ( 17) 00:07:40.512 11544.418 - 11594.831: 94.0483% ( 27) 00:07:40.512 11594.831 - 11645.243: 94.2685% ( 31) 00:07:40.512 11645.243 - 11695.655: 94.4389% ( 24) 00:07:40.512 11695.655 - 11746.068: 94.5881% ( 21) 00:07:40.512 11746.068 - 11796.480: 94.7301% ( 20) 00:07:40.512 11796.480 - 11846.892: 94.8509% ( 17) 00:07:40.512 11846.892 - 11897.305: 94.9574% ( 15) 00:07:40.512 11897.305 - 11947.717: 95.0355% ( 11) 00:07:40.512 11947.717 - 11998.129: 95.1207% ( 12) 00:07:40.512 11998.129 - 12048.542: 95.1989% ( 11) 00:07:40.512 12048.542 - 12098.954: 95.2699% ( 10) 00:07:40.512 12098.954 - 12149.366: 95.3267% ( 8) 00:07:40.512 12149.366 - 12199.778: 95.4048% ( 11) 00:07:40.512 12199.778 - 12250.191: 95.4830% ( 11) 00:07:40.512 12250.191 - 12300.603: 95.5611% ( 11) 00:07:40.512 12300.603 - 12351.015: 95.6747% ( 16) 00:07:40.512 12351.015 - 12401.428: 95.7741% ( 14) 00:07:40.512 12401.428 - 12451.840: 95.8594% ( 12) 00:07:40.512 12451.840 - 12502.252: 95.9517% ( 13) 00:07:40.512 12502.252 - 12552.665: 96.0298% ( 11) 00:07:40.512 12552.665 - 12603.077: 96.1009% ( 10) 00:07:40.512 12603.077 - 12653.489: 96.1719% ( 10) 00:07:40.512 12653.489 - 12703.902: 96.2571% ( 12) 00:07:40.512 12703.902 - 12754.314: 96.3352% ( 11) 00:07:40.512 12754.314 - 12804.726: 96.4489% ( 16) 00:07:40.512 12804.726 - 12855.138: 96.6051% ( 22) 00:07:40.512 12855.138 - 12905.551: 96.8466% ( 34) 00:07:40.512 12905.551 - 13006.375: 97.2017% ( 50) 00:07:40.512 13006.375 - 13107.200: 97.5142% ( 44) 00:07:40.512 13107.200 - 13208.025: 97.7415% ( 32) 00:07:40.512 13208.025 - 13308.849: 97.8977% ( 22) 00:07:40.512 13308.849 - 13409.674: 98.0043% ( 15) 00:07:40.512 13409.674 - 13510.498: 98.0469% ( 6) 00:07:40.512 13510.498 - 13611.323: 98.0753% ( 4) 00:07:40.512 13611.323 - 13712.148: 98.0966% ( 3) 00:07:40.512 13712.148 - 13812.972: 98.1463% ( 7) 00:07:40.512 13812.972 - 13913.797: 98.2102% ( 9) 00:07:40.512 13913.797 - 14014.622: 98.2741% ( 9) 00:07:40.512 14014.622 - 14115.446: 98.3239% ( 7) 00:07:40.512 14115.446 - 14216.271: 98.3523% ( 4) 00:07:40.512 14216.271 - 14317.095: 98.3807% ( 4) 00:07:40.512 14317.095 - 14417.920: 98.4162% ( 5) 00:07:40.512 14417.920 - 14518.745: 98.5866% ( 24) 00:07:40.512 14518.745 - 14619.569: 98.6151% ( 4) 00:07:40.512 14619.569 - 14720.394: 98.6364% ( 3) 00:07:40.512 15022.868 - 15123.692: 98.6648% ( 4) 00:07:40.512 15123.692 - 15224.517: 98.6932% ( 4) 00:07:40.512 15224.517 - 15325.342: 98.7287% ( 5) 00:07:40.512 15325.342 - 15426.166: 98.7500% ( 3) 00:07:40.512 15426.166 - 15526.991: 98.7855% ( 5) 00:07:40.512 15526.991 - 15627.815: 98.8068% ( 3) 00:07:40.512 15627.815 - 15728.640: 98.8352% ( 4) 00:07:40.512 15728.640 - 15829.465: 99.0057% ( 24) 00:07:40.512 15829.465 - 15930.289: 99.0341% ( 4) 00:07:40.512 15930.289 - 16031.114: 99.0696% ( 5) 00:07:40.512 16031.114 - 16131.938: 99.0909% ( 3) 00:07:40.512 20467.397 - 20568.222: 99.0980% ( 1) 00:07:40.512 20669.046 - 20769.871: 99.1051% ( 1) 00:07:40.512 20769.871 - 20870.695: 99.1193% ( 2) 00:07:40.512 20870.695 - 20971.520: 99.1477% ( 4) 00:07:40.512 20971.520 - 21072.345: 99.1832% ( 5) 00:07:40.512 21072.345 - 21173.169: 99.2188% ( 5) 00:07:40.512 21173.169 - 21273.994: 99.2543% ( 5) 00:07:40.512 21273.994 - 21374.818: 99.2827% ( 4) 00:07:40.512 21374.818 - 21475.643: 99.3182% ( 5) 00:07:40.512 21475.643 - 21576.468: 99.3466% ( 4) 00:07:40.512 21576.468 - 21677.292: 99.3750% ( 4) 00:07:40.512 21677.292 - 21778.117: 99.4034% ( 4) 00:07:40.512 21778.117 - 
21878.942: 99.4247% ( 3)
00:07:40.512 21878.942 - 21979.766: 99.4460% ( 3)
00:07:40.512 21979.766 - 22080.591: 99.4673% ( 3)
00:07:40.512 22080.591 - 22181.415: 99.4957% ( 4)
00:07:40.512 22181.415 - 22282.240: 99.5099% ( 2)
00:07:40.512 22282.240 - 22383.065: 99.5384% ( 4)
00:07:40.512 22383.065 - 22483.889: 99.5455% ( 1)
00:07:40.512 27625.945 - 27827.594: 99.5526% ( 1)
00:07:40.512 27827.594 - 28029.243: 99.6449% ( 13)
00:07:40.512 28029.243 - 28230.892: 99.7656% ( 17)
00:07:40.512 28230.892 - 28432.542: 99.7727% ( 1)
00:07:40.512 28432.542 - 28634.191: 99.8082% ( 5)
00:07:40.512 28634.191 - 28835.840: 99.8651% ( 8)
00:07:40.512 28835.840 - 29037.489: 99.9432% ( 11)
00:07:40.512 29037.489 - 29239.138: 100.0000% ( 8)
00:07:40.512
00:07:40.512 09:25:07 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:40.512
00:07:40.512 real 0m2.501s
00:07:40.512 user 0m2.195s
00:07:40.512 sys 0m0.197s
00:07:40.512 09:25:07 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.512 09:25:07 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:40.512 ************************************
00:07:40.512 END TEST nvme_perf
00:07:40.512 ************************************
00:07:40.512 09:25:07 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:40.512 09:25:07 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:40.512 09:25:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.512 09:25:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.512 ************************************
00:07:40.512 START TEST nvme_hello_world
00:07:40.512 ************************************
00:07:40.512 09:25:07 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:40.512 Initializing NVMe Controllers
00:07:40.512 Attached to 0000:00:10.0
00:07:40.512 Namespace ID: 1 size: 6GB
00:07:40.512 Attached to 0000:00:11.0
00:07:40.512 Namespace ID: 1 size: 5GB
00:07:40.512 Attached to 0000:00:13.0
00:07:40.512 Namespace ID: 1 size: 1GB
00:07:40.512 Attached to 0000:00:12.0
00:07:40.512 Namespace ID: 1 size: 4GB
00:07:40.512 Namespace ID: 2 size: 4GB
00:07:40.512 Namespace ID: 3 size: 4GB
00:07:40.512 Initialization complete.
00:07:40.512 INFO: using host memory buffer for IO
00:07:40.512 Hello world!
00:07:40.512 INFO: using host memory buffer for IO
00:07:40.512 Hello world!
00:07:40.512 INFO: using host memory buffer for IO
00:07:40.512 Hello world!
00:07:40.512 INFO: using host memory buffer for IO
00:07:40.512 Hello world!
00:07:40.512 INFO: using host memory buffer for IO
00:07:40.512 Hello world!
00:07:40.512 INFO: using host memory buffer for IO
00:07:40.512 Hello world!
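The hello_world example above attaches each controller, prints one line per active namespace, and then writes and reads back a "Hello world!" buffer through host memory on each. A minimal sketch of reproducing the run outside the Jenkins harness, assuming an SPDK checkout built with examples and root access (the HUGEMEM value is an assumption, not taken from this log):

  # Reserve hugepages and bind the NVMe devices to a userspace driver first.
  sudo HUGEMEM=2048 ./scripts/setup.sh
  # Same binary and shared-memory id (-i 0) as the run_test invocation above.
  sudo ./build/examples/hello_world -i 0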
00:07:40.512
00:07:40.512 real 0m0.225s
00:07:40.512 user 0m0.071s
00:07:40.512 sys 0m0.100s
00:07:40.512 09:25:08 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.512 09:25:08 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:40.512 ************************************
00:07:40.512 END TEST nvme_hello_world
00:07:40.512 ************************************
00:07:40.512 09:25:08 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:40.512 09:25:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.512 09:25:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.512 09:25:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.512 ************************************
00:07:40.512 START TEST nvme_sgl
00:07:40.512 ************************************
00:07:40.512 09:25:08 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:40.771 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:40.771 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:40.771 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:40.771 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:40.771 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:40.771 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:40.771 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:40.771 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:40.771 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:40.771 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:40.771 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:40.771 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:40.771 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:40.771 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:40.771 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:40.771 NVMe Readv/Writev Request test 00:07:40.771 Attached to 0000:00:10.0 00:07:40.771 Attached to 0000:00:11.0 00:07:40.771 Attached to 0000:00:13.0 00:07:40.771 Attached to 0000:00:12.0 00:07:40.771 0000:00:10.0: build_io_request_2 test passed 00:07:40.771 0000:00:10.0: build_io_request_4 test passed 00:07:40.771 0000:00:10.0: build_io_request_5 test passed 00:07:40.771 0000:00:10.0: build_io_request_6 test passed 00:07:40.771 0000:00:10.0: build_io_request_7 test passed 00:07:40.771 0000:00:10.0: build_io_request_10 test passed 00:07:40.771 0000:00:11.0: build_io_request_2 test passed 00:07:40.771 0000:00:11.0: build_io_request_4 test passed 00:07:40.771 0000:00:11.0: build_io_request_5 test passed 00:07:40.771 0000:00:11.0: build_io_request_6 test passed 00:07:40.771 0000:00:11.0: build_io_request_7 test passed 00:07:40.771 0000:00:11.0: build_io_request_10 test passed 00:07:40.771 Cleaning up... 00:07:40.771 00:07:40.771 real 0m0.266s 00:07:40.771 user 0m0.123s 00:07:40.771 sys 0m0.096s 00:07:40.771 09:25:08 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.771 09:25:08 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:40.771 ************************************ 00:07:40.771 END TEST nvme_sgl 00:07:40.771 ************************************ 00:07:40.771 09:25:08 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:40.771 09:25:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.771 09:25:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.771 09:25:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.772 ************************************ 00:07:40.772 START TEST nvme_e2edp 00:07:40.772 ************************************ 00:07:40.772 09:25:08 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:41.030 NVMe Write/Read with End-to-End data protection test 00:07:41.030 Attached to 0000:00:10.0 00:07:41.030 Attached to 0000:00:11.0 00:07:41.030 Attached to 0000:00:13.0 00:07:41.030 Attached to 0000:00:12.0 00:07:41.030 Cleaning up... 
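The sgl and e2edp steps just completed go through that same wrapper; a condensed sketch of the two invocations, with binary paths verbatim from the trace ($rootdir is the harness variable visible later in this log, equal to /home/vagrant/spdk_repo/spdk):

    # SGL test: issues build_io_request_0..11 against each controller; per the
    # output above, certain lengths are expected to be rejected ("Invalid IO
    # length parameter") while the rest report "test passed".
    run_test nvme_sgl "$rootdir/test/nvme/sgl/sgl"

    # End-to-end data protection test: attaches to all four controllers, runs
    # the protected write/read pass, and, per the output above, prints nothing
    # per-request on success before "Cleaning up...".
    run_test nvme_e2edp "$rootdir/test/nvme/e2edp/nvme_dp"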
00:07:41.030 00:07:41.030 real 0m0.209s 00:07:41.030 user 0m0.072s 00:07:41.030 sys 0m0.092s 00:07:41.030 09:25:08 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.030 09:25:08 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:41.030 ************************************ 00:07:41.030 END TEST nvme_e2edp 00:07:41.030 ************************************ 00:07:41.030 09:25:08 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:41.030 09:25:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.030 09:25:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.030 09:25:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.030 ************************************ 00:07:41.030 START TEST nvme_reserve 00:07:41.030 ************************************ 00:07:41.030 09:25:08 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:41.288 ===================================================== 00:07:41.288 NVMe Controller at PCI bus 0, device 16, function 0 00:07:41.288 ===================================================== 00:07:41.288 Reservations: Not Supported 00:07:41.288 ===================================================== 00:07:41.288 NVMe Controller at PCI bus 0, device 17, function 0 00:07:41.288 ===================================================== 00:07:41.288 Reservations: Not Supported 00:07:41.288 ===================================================== 00:07:41.288 NVMe Controller at PCI bus 0, device 19, function 0 00:07:41.288 ===================================================== 00:07:41.288 Reservations: Not Supported 00:07:41.288 ===================================================== 00:07:41.288 NVMe Controller at PCI bus 0, device 18, function 0 00:07:41.288 ===================================================== 00:07:41.288 Reservations: Not Supported 00:07:41.288 Reservation test passed 00:07:41.288 ************************************ 00:07:41.288 END TEST nvme_reserve 00:07:41.288 ************************************ 00:07:41.288 00:07:41.288 real 0m0.202s 00:07:41.288 user 0m0.065s 00:07:41.288 sys 0m0.092s 00:07:41.288 09:25:08 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.288 09:25:08 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:41.288 09:25:08 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:41.288 09:25:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.288 09:25:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.288 09:25:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.288 ************************************ 00:07:41.288 START TEST nvme_err_injection 00:07:41.288 ************************************ 00:07:41.288 09:25:08 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:41.546 NVMe Error Injection test 00:07:41.546 Attached to 0000:00:10.0 00:07:41.546 Attached to 0000:00:11.0 00:07:41.546 Attached to 0000:00:13.0 00:07:41.546 Attached to 0000:00:12.0 00:07:41.546 0000:00:10.0: get features failed as expected 00:07:41.546 0000:00:11.0: get features failed as expected 00:07:41.546 0000:00:13.0: get features failed as expected 00:07:41.546 0000:00:12.0: get features failed as expected 00:07:41.546 
0000:00:10.0: get features successfully as expected 00:07:41.546 0000:00:11.0: get features successfully as expected 00:07:41.546 0000:00:13.0: get features successfully as expected 00:07:41.546 0000:00:12.0: get features successfully as expected 00:07:41.546 0000:00:11.0: read failed as expected 00:07:41.546 0000:00:13.0: read failed as expected 00:07:41.546 0000:00:12.0: read failed as expected 00:07:41.546 0000:00:10.0: read failed as expected 00:07:41.546 0000:00:11.0: read successfully as expected 00:07:41.546 0000:00:13.0: read successfully as expected 00:07:41.546 0000:00:12.0: read successfully as expected 00:07:41.546 0000:00:10.0: read successfully as expected 00:07:41.546 Cleaning up... 00:07:41.546 00:07:41.546 real 0m0.214s 00:07:41.546 user 0m0.072s 00:07:41.546 sys 0m0.099s 00:07:41.546 09:25:09 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.546 09:25:09 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:41.546 ************************************ 00:07:41.546 END TEST nvme_err_injection 00:07:41.546 ************************************ 00:07:41.546 09:25:09 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:41.546 09:25:09 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:41.546 09:25:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.546 09:25:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.546 ************************************ 00:07:41.546 START TEST nvme_overhead 00:07:41.546 ************************************ 00:07:41.546 09:25:09 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:42.920 Initializing NVMe Controllers 00:07:42.920 Attached to 0000:00:10.0 00:07:42.920 Attached to 0000:00:11.0 00:07:42.920 Attached to 0000:00:13.0 00:07:42.920 Attached to 0000:00:12.0 00:07:42.920 Initialization complete. Launching workers. 
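A hedged reading of the overhead invocation above, before its results stream in; the flag meanings are inferred from the command line and its output, not taken from the tool's help text:

    # -o 4096: 4 KiB I/Os; -t 1: one-second run; -H: print the submit/complete
    # latency histograms that follow; -i 0: the shared-memory instance id every
    # step in this log passes.  These readings are inferences, not --help text.
    run_test nvme_overhead "$rootdir/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0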
00:07:42.920 submit (in ns) avg, min, max = 12325.3, 10676.2, 78381.5 00:07:42.920 complete (in ns) avg, min, max = 7700.9, 7252.3, 997600.0 00:07:42.920 00:07:42.920 Submit histogram 00:07:42.920 ================ 00:07:42.920 Range in us Cumulative Count 00:07:42.920 10.634 - 10.683: 0.0062% ( 1) 00:07:42.920 11.520 - 11.569: 0.0247% ( 3) 00:07:42.920 11.569 - 11.618: 0.1980% ( 28) 00:07:42.920 11.618 - 11.668: 0.9219% ( 117) 00:07:42.920 11.668 - 11.717: 3.4092% ( 402) 00:07:42.920 11.717 - 11.766: 8.0188% ( 745) 00:07:42.920 11.766 - 11.815: 15.0291% ( 1133) 00:07:42.920 11.815 - 11.865: 23.7594% ( 1411) 00:07:42.920 11.865 - 11.914: 32.7435% ( 1452) 00:07:42.920 11.914 - 11.963: 41.3191% ( 1386) 00:07:42.920 11.963 - 12.012: 48.2490% ( 1120) 00:07:42.920 12.012 - 12.062: 53.9413% ( 920) 00:07:42.920 12.062 - 12.111: 57.9074% ( 641) 00:07:42.920 12.111 - 12.160: 61.1929% ( 531) 00:07:42.920 12.160 - 12.209: 63.7792% ( 418) 00:07:42.920 12.209 - 12.258: 66.0191% ( 362) 00:07:42.920 12.258 - 12.308: 68.1723% ( 348) 00:07:42.920 12.308 - 12.357: 70.1213% ( 315) 00:07:42.920 12.357 - 12.406: 72.2497% ( 344) 00:07:42.920 12.406 - 12.455: 74.4400% ( 354) 00:07:42.920 12.455 - 12.505: 76.6056% ( 350) 00:07:42.920 12.505 - 12.554: 78.9259% ( 375) 00:07:42.920 12.554 - 12.603: 81.0234% ( 339) 00:07:42.920 12.603 - 12.702: 84.8224% ( 614) 00:07:42.920 12.702 - 12.800: 88.0151% ( 516) 00:07:42.920 12.800 - 12.898: 90.7870% ( 448) 00:07:42.920 12.898 - 12.997: 93.0392% ( 364) 00:07:42.920 12.997 - 13.095: 94.3571% ( 213) 00:07:42.920 13.095 - 13.194: 95.2172% ( 139) 00:07:42.920 13.194 - 13.292: 95.6689% ( 73) 00:07:42.920 13.292 - 13.391: 95.9844% ( 51) 00:07:42.920 13.391 - 13.489: 96.1824% ( 32) 00:07:42.920 13.489 - 13.588: 96.3618% ( 29) 00:07:42.920 13.588 - 13.686: 96.4794% ( 19) 00:07:42.920 13.686 - 13.785: 96.6403% ( 26) 00:07:42.920 13.785 - 13.883: 96.8444% ( 33) 00:07:42.920 13.883 - 13.982: 97.0239% ( 29) 00:07:42.921 13.982 - 14.080: 97.1909% ( 27) 00:07:42.921 14.080 - 14.178: 97.3209% ( 21) 00:07:42.921 14.178 - 14.277: 97.4756% ( 25) 00:07:42.921 14.277 - 14.375: 97.5374% ( 10) 00:07:42.921 14.375 - 14.474: 97.5931% ( 9) 00:07:42.921 14.474 - 14.572: 97.6612% ( 11) 00:07:42.921 14.572 - 14.671: 97.7478% ( 14) 00:07:42.921 14.671 - 14.769: 97.7973% ( 8) 00:07:42.921 14.769 - 14.868: 97.8654% ( 11) 00:07:42.921 14.868 - 14.966: 97.8901% ( 4) 00:07:42.921 14.966 - 15.065: 97.9149% ( 4) 00:07:42.921 15.065 - 15.163: 97.9334% ( 3) 00:07:42.921 15.163 - 15.262: 97.9644% ( 5) 00:07:42.921 15.262 - 15.360: 97.9829% ( 3) 00:07:42.921 15.360 - 15.458: 97.9891% ( 1) 00:07:42.921 15.458 - 15.557: 97.9953% ( 1) 00:07:42.921 15.557 - 15.655: 98.0324% ( 6) 00:07:42.921 15.655 - 15.754: 98.0572% ( 4) 00:07:42.921 15.754 - 15.852: 98.0881% ( 5) 00:07:42.921 15.852 - 15.951: 98.1500% ( 10) 00:07:42.921 15.951 - 16.049: 98.1809% ( 5) 00:07:42.921 16.049 - 16.148: 98.2119% ( 5) 00:07:42.921 16.148 - 16.246: 98.2242% ( 2) 00:07:42.921 16.246 - 16.345: 98.2490% ( 4) 00:07:42.921 16.345 - 16.443: 98.2675% ( 3) 00:07:42.921 16.443 - 16.542: 98.3047% ( 6) 00:07:42.921 16.542 - 16.640: 98.3170% ( 2) 00:07:42.921 16.640 - 16.738: 98.3480% ( 5) 00:07:42.921 16.837 - 16.935: 98.3665% ( 3) 00:07:42.921 16.935 - 17.034: 98.3727% ( 1) 00:07:42.921 17.034 - 17.132: 98.3913% ( 3) 00:07:42.921 17.132 - 17.231: 98.4160% ( 4) 00:07:42.921 17.231 - 17.329: 98.4470% ( 5) 00:07:42.921 17.329 - 17.428: 98.4593% ( 2) 00:07:42.921 17.428 - 17.526: 98.4903% ( 5) 00:07:42.921 17.526 - 17.625: 98.5583% ( 11) 00:07:42.921 
17.625 - 17.723: 98.6264% ( 11) 00:07:42.921 17.723 - 17.822: 98.7068% ( 13) 00:07:42.921 17.822 - 17.920: 98.7997% ( 15) 00:07:42.921 17.920 - 18.018: 98.8677% ( 11) 00:07:42.921 18.018 - 18.117: 98.9234% ( 9) 00:07:42.921 18.117 - 18.215: 98.9420% ( 3) 00:07:42.921 18.215 - 18.314: 98.9976% ( 9) 00:07:42.921 18.314 - 18.412: 99.0595% ( 10) 00:07:42.921 18.412 - 18.511: 99.1461% ( 14) 00:07:42.921 18.511 - 18.609: 99.2266% ( 13) 00:07:42.921 18.609 - 18.708: 99.2575% ( 5) 00:07:42.921 18.708 - 18.806: 99.3008% ( 7) 00:07:42.921 18.806 - 18.905: 99.3380% ( 6) 00:07:42.921 18.905 - 19.003: 99.3936% ( 9) 00:07:42.921 19.003 - 19.102: 99.4308% ( 6) 00:07:42.921 19.102 - 19.200: 99.4741% ( 7) 00:07:42.921 19.200 - 19.298: 99.5669% ( 15) 00:07:42.921 19.298 - 19.397: 99.5854% ( 3) 00:07:42.921 19.397 - 19.495: 99.6040% ( 3) 00:07:42.921 19.495 - 19.594: 99.6349% ( 5) 00:07:42.921 19.594 - 19.692: 99.6659% ( 5) 00:07:42.921 19.692 - 19.791: 99.6968% ( 5) 00:07:42.921 19.791 - 19.889: 99.7030% ( 1) 00:07:42.921 19.889 - 19.988: 99.7278% ( 4) 00:07:42.921 19.988 - 20.086: 99.7525% ( 4) 00:07:42.921 20.283 - 20.382: 99.7587% ( 1) 00:07:42.921 20.382 - 20.480: 99.7649% ( 1) 00:07:42.921 20.480 - 20.578: 99.7711% ( 1) 00:07:42.921 20.578 - 20.677: 99.7773% ( 1) 00:07:42.921 20.874 - 20.972: 99.7896% ( 2) 00:07:42.921 21.071 - 21.169: 99.7958% ( 1) 00:07:42.921 21.169 - 21.268: 99.8020% ( 1) 00:07:42.921 21.268 - 21.366: 99.8082% ( 1) 00:07:42.921 21.465 - 21.563: 99.8206% ( 2) 00:07:42.921 21.563 - 21.662: 99.8268% ( 1) 00:07:42.921 21.662 - 21.760: 99.8329% ( 1) 00:07:42.921 21.858 - 21.957: 99.8391% ( 1) 00:07:42.921 21.957 - 22.055: 99.8515% ( 2) 00:07:42.921 22.055 - 22.154: 99.8577% ( 1) 00:07:42.921 22.449 - 22.548: 99.8639% ( 1) 00:07:42.921 22.942 - 23.040: 99.8701% ( 1) 00:07:42.921 23.335 - 23.434: 99.8763% ( 1) 00:07:42.921 24.320 - 24.418: 99.8824% ( 1) 00:07:42.921 24.418 - 24.517: 99.8886% ( 1) 00:07:42.921 24.517 - 24.615: 99.8948% ( 1) 00:07:42.921 24.714 - 24.812: 99.9010% ( 1) 00:07:42.921 24.812 - 24.911: 99.9072% ( 1) 00:07:42.921 25.403 - 25.600: 99.9134% ( 1) 00:07:42.921 26.782 - 26.978: 99.9196% ( 1) 00:07:42.921 26.978 - 27.175: 99.9319% ( 2) 00:07:42.921 27.372 - 27.569: 99.9381% ( 1) 00:07:42.921 27.963 - 28.160: 99.9443% ( 1) 00:07:42.921 28.357 - 28.554: 99.9505% ( 1) 00:07:42.921 30.326 - 30.523: 99.9567% ( 1) 00:07:42.921 30.523 - 30.720: 99.9629% ( 1) 00:07:42.921 30.917 - 31.114: 99.9691% ( 1) 00:07:42.921 33.280 - 33.477: 99.9753% ( 1) 00:07:42.921 34.265 - 34.462: 99.9814% ( 1) 00:07:42.921 37.218 - 37.415: 99.9876% ( 1) 00:07:42.921 39.975 - 40.172: 99.9938% ( 1) 00:07:42.921 78.375 - 78.769: 100.0000% ( 1) 00:07:42.921 00:07:42.921 Complete histogram 00:07:42.921 ================== 00:07:42.921 Range in us Cumulative Count 00:07:42.921 7.237 - 7.286: 0.0433% ( 7) 00:07:42.921 7.286 - 7.335: 0.4146% ( 60) 00:07:42.921 7.335 - 7.385: 4.3435% ( 635) 00:07:42.921 7.385 - 7.434: 16.7677% ( 2008) 00:07:42.921 7.434 - 7.483: 37.2046% ( 3303) 00:07:42.921 7.483 - 7.532: 57.7961% ( 3328) 00:07:42.921 7.532 - 7.582: 72.8437% ( 2432) 00:07:42.921 7.582 - 7.631: 83.0528% ( 1650) 00:07:42.921 7.631 - 7.680: 89.5372% ( 1048) 00:07:42.921 7.680 - 7.729: 93.4105% ( 626) 00:07:42.921 7.729 - 7.778: 95.5513% ( 346) 00:07:42.921 7.778 - 7.828: 96.7640% ( 196) 00:07:42.921 7.828 - 7.877: 97.4075% ( 104) 00:07:42.921 7.877 - 7.926: 97.7292% ( 52) 00:07:42.921 7.926 - 7.975: 97.9582% ( 37) 00:07:42.921 7.975 - 8.025: 98.0819% ( 20) 00:07:42.921 8.025 - 8.074: 98.1562% ( 12) 
00:07:42.921 8.074 - 8.123: 98.1871% ( 5) 00:07:42.921 8.123 - 8.172: 98.2428% ( 9) 00:07:42.921 8.172 - 8.222: 98.2799% ( 6) 00:07:42.921 8.222 - 8.271: 98.2985% ( 3) 00:07:42.921 8.271 - 8.320: 98.3047% ( 1) 00:07:42.921 8.418 - 8.468: 98.3170% ( 2) 00:07:42.921 8.468 - 8.517: 98.3232% ( 1) 00:07:42.921 8.615 - 8.665: 98.3294% ( 1) 00:07:42.921 8.665 - 8.714: 98.3480% ( 3) 00:07:42.921 8.812 - 8.862: 98.3542% ( 1) 00:07:42.921 9.009 - 9.058: 98.3665% ( 2) 00:07:42.921 9.108 - 9.157: 98.3727% ( 1) 00:07:42.921 9.502 - 9.551: 98.3851% ( 2) 00:07:42.921 9.551 - 9.600: 98.3913% ( 1) 00:07:42.921 9.698 - 9.748: 98.3975% ( 1) 00:07:42.921 9.748 - 9.797: 98.4037% ( 1) 00:07:42.921 9.846 - 9.895: 98.4099% ( 1) 00:07:42.921 10.092 - 10.142: 98.4160% ( 1) 00:07:42.921 10.338 - 10.388: 98.4222% ( 1) 00:07:42.921 10.388 - 10.437: 98.4346% ( 2) 00:07:42.921 10.486 - 10.535: 98.4470% ( 2) 00:07:42.921 10.634 - 10.683: 98.4593% ( 2) 00:07:42.921 10.683 - 10.732: 98.4655% ( 1) 00:07:42.921 10.732 - 10.782: 98.4717% ( 1) 00:07:42.921 10.831 - 10.880: 98.4779% ( 1) 00:07:42.921 10.929 - 10.978: 98.4903% ( 2) 00:07:42.921 10.978 - 11.028: 98.5088% ( 3) 00:07:42.921 11.028 - 11.077: 98.5150% ( 1) 00:07:42.921 11.126 - 11.175: 98.5274% ( 2) 00:07:42.921 11.372 - 11.422: 98.5398% ( 2) 00:07:42.921 11.422 - 11.471: 98.5522% ( 2) 00:07:42.921 11.471 - 11.520: 98.5583% ( 1) 00:07:42.921 11.569 - 11.618: 98.5645% ( 1) 00:07:42.921 11.618 - 11.668: 98.5769% ( 2) 00:07:42.921 12.111 - 12.160: 98.5893% ( 2) 00:07:42.921 12.160 - 12.209: 98.5955% ( 1) 00:07:42.921 12.258 - 12.308: 98.6017% ( 1) 00:07:42.921 12.308 - 12.357: 98.6078% ( 1) 00:07:42.921 12.505 - 12.554: 98.6140% ( 1) 00:07:42.921 12.603 - 12.702: 98.6264% ( 2) 00:07:42.921 12.702 - 12.800: 98.6388% ( 2) 00:07:42.921 12.898 - 12.997: 98.6450% ( 1) 00:07:42.921 13.095 - 13.194: 98.6821% ( 6) 00:07:42.921 13.194 - 13.292: 98.7254% ( 7) 00:07:42.921 13.292 - 13.391: 98.7811% ( 9) 00:07:42.921 13.391 - 13.489: 98.8368% ( 9) 00:07:42.921 13.489 - 13.588: 98.9048% ( 11) 00:07:42.921 13.588 - 13.686: 98.9543% ( 8) 00:07:42.921 13.686 - 13.785: 98.9976% ( 7) 00:07:42.921 13.785 - 13.883: 99.0905% ( 15) 00:07:42.921 13.883 - 13.982: 99.1771% ( 14) 00:07:42.921 13.982 - 14.080: 99.2513% ( 12) 00:07:42.921 14.080 - 14.178: 99.3318% ( 13) 00:07:42.921 14.178 - 14.277: 99.3689% ( 6) 00:07:42.921 14.277 - 14.375: 99.4679% ( 16) 00:07:42.921 14.375 - 14.474: 99.5298% ( 10) 00:07:42.921 14.474 - 14.572: 99.5854% ( 9) 00:07:42.921 14.572 - 14.671: 99.6226% ( 6) 00:07:42.921 14.671 - 14.769: 99.6597% ( 6) 00:07:42.921 14.769 - 14.868: 99.6783% ( 3) 00:07:42.921 14.868 - 14.966: 99.7154% ( 6) 00:07:42.921 14.966 - 15.065: 99.7339% ( 3) 00:07:42.921 15.065 - 15.163: 99.7463% ( 2) 00:07:42.921 15.163 - 15.262: 99.7649% ( 3) 00:07:42.921 15.262 - 15.360: 99.7711% ( 1) 00:07:42.921 15.458 - 15.557: 99.7773% ( 1) 00:07:42.921 15.557 - 15.655: 99.7834% ( 1) 00:07:42.921 15.754 - 15.852: 99.7896% ( 1) 00:07:42.921 15.852 - 15.951: 99.8020% ( 2) 00:07:42.921 15.951 - 16.049: 99.8144% ( 2) 00:07:42.921 16.049 - 16.148: 99.8206% ( 1) 00:07:42.921 16.246 - 16.345: 99.8268% ( 1) 00:07:42.921 16.345 - 16.443: 99.8329% ( 1) 00:07:42.922 16.542 - 16.640: 99.8391% ( 1) 00:07:42.922 16.738 - 16.837: 99.8515% ( 2) 00:07:42.922 16.837 - 16.935: 99.8577% ( 1) 00:07:42.922 17.132 - 17.231: 99.8701% ( 2) 00:07:42.922 17.428 - 17.526: 99.8763% ( 1) 00:07:42.922 17.625 - 17.723: 99.8948% ( 3) 00:07:42.922 17.822 - 17.920: 99.9010% ( 1) 00:07:42.922 18.018 - 18.117: 99.9134% ( 2) 00:07:42.922 
18.314 - 18.412: 99.9196% ( 1) 00:07:42.922 18.806 - 18.905: 99.9258% ( 1) 00:07:42.922 19.102 - 19.200: 99.9319% ( 1) 00:07:42.922 19.200 - 19.298: 99.9381% ( 1) 00:07:42.922 19.791 - 19.889: 99.9443% ( 1) 00:07:42.922 20.086 - 20.185: 99.9505% ( 1) 00:07:42.922 21.268 - 21.366: 99.9567% ( 1) 00:07:42.922 21.563 - 21.662: 99.9629% ( 1) 00:07:42.922 23.237 - 23.335: 99.9691% ( 1) 00:07:42.922 23.532 - 23.631: 99.9753% ( 1) 00:07:42.922 30.523 - 30.720: 99.9814% ( 1) 00:07:42.922 39.385 - 39.582: 99.9876% ( 1) 00:07:42.922 41.354 - 41.551: 99.9938% ( 1) 00:07:42.922 995.643 - 1001.945: 100.0000% ( 1) 00:07:42.922 00:07:42.922 ************************************ 00:07:42.922 END TEST nvme_overhead 00:07:42.922 ************************************ 00:07:42.922 00:07:42.922 real 0m1.211s 00:07:42.922 user 0m1.068s 00:07:42.922 sys 0m0.092s 00:07:42.922 09:25:10 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.922 09:25:10 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 09:25:10 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:42.922 09:25:10 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:42.922 09:25:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.922 09:25:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 ************************************ 00:07:42.922 START TEST nvme_arbitration 00:07:42.922 ************************************ 00:07:42.922 09:25:10 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:46.221 Initializing NVMe Controllers 00:07:46.221 Attached to 0000:00:10.0 00:07:46.221 Attached to 0000:00:11.0 00:07:46.221 Attached to 0000:00:13.0 00:07:46.221 Attached to 0000:00:12.0 00:07:46.221 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:46.221 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:46.221 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:46.221 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:46.221 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:46.221 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:46.221 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:46.221 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:46.221 Initialization complete. Launching workers. 
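The arbitration step launched above passes only -t 3 -i 0 on the command line; the binary itself echoes the full effective configuration (the -q 64 -s 131072 ... line in the trace). As a sketch:

    # Effective configuration echoed by the binary (verbatim from the log):
    #   arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
    run_test nvme_arbitration "$rootdir/build/examples/arbitration" -t 3 -i 0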
00:07:46.221 Starting thread on core 1 with urgent priority queue 00:07:46.221 Starting thread on core 2 with urgent priority queue 00:07:46.221 Starting thread on core 3 with urgent priority queue 00:07:46.221 Starting thread on core 0 with urgent priority queue 00:07:46.221 QEMU NVMe Ctrl (12340 ) core 0: 6250.67 IO/s 16.00 secs/100000 ios 00:07:46.221 QEMU NVMe Ctrl (12342 ) core 0: 6250.67 IO/s 16.00 secs/100000 ios 00:07:46.221 QEMU NVMe Ctrl (12341 ) core 1: 5717.33 IO/s 17.49 secs/100000 ios 00:07:46.221 QEMU NVMe Ctrl (12342 ) core 1: 5717.33 IO/s 17.49 secs/100000 ios 00:07:46.221 QEMU NVMe Ctrl (12343 ) core 2: 5738.67 IO/s 17.43 secs/100000 ios 00:07:46.221 QEMU NVMe Ctrl (12342 ) core 3: 5162.67 IO/s 19.37 secs/100000 ios 00:07:46.221 ======================================================== 00:07:46.221 00:07:46.221 ************************************ 00:07:46.221 END TEST nvme_arbitration 00:07:46.221 ************************************ 00:07:46.221 00:07:46.221 real 0m3.245s 00:07:46.221 user 0m9.051s 00:07:46.221 sys 0m0.115s 00:07:46.221 09:25:13 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.221 09:25:13 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 09:25:13 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.221 09:25:13 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.221 09:25:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.221 09:25:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 ************************************ 00:07:46.221 START TEST nvme_single_aen 00:07:46.221 ************************************ 00:07:46.221 09:25:13 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.482 Asynchronous Event Request test 00:07:46.482 Attached to 0000:00:10.0 00:07:46.482 Attached to 0000:00:11.0 00:07:46.482 Attached to 0000:00:13.0 00:07:46.482 Attached to 0000:00:12.0 00:07:46.482 Reset controller to setup AER completions for this process 00:07:46.482 Registering asynchronous event callbacks... 
00:07:46.482 Getting orig temperature thresholds of all controllers 00:07:46.482 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.482 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.482 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.482 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.482 Setting all controllers temperature threshold low to trigger AER 00:07:46.482 Waiting for all controllers temperature threshold to be set lower 00:07:46.482 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.482 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:46.482 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.482 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:46.482 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.482 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:46.482 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.482 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:46.482 Waiting for all controllers to trigger AER and reset threshold 00:07:46.482 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.482 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.482 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.482 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.482 Cleaning up... 00:07:46.482 ************************************ 00:07:46.482 END TEST nvme_single_aen 00:07:46.482 ************************************ 00:07:46.482 00:07:46.482 real 0m0.215s 00:07:46.482 user 0m0.074s 00:07:46.482 sys 0m0.099s 00:07:46.482 09:25:13 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.482 09:25:13 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:46.482 09:25:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:46.482 09:25:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.482 09:25:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.482 09:25:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.482 ************************************ 00:07:46.482 START TEST nvme_doorbell_aers 00:07:46.482 ************************************ 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
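The xtrace above shows how nvme_doorbell_aers() enumerates controllers before the per-device runs that follow; a consolidated sketch (the helper, path, and jq filter are the ones in the trace, the surrounding scaffolding is an assumption):

    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # the trace's (( 4 == 0 )) check
        printf '%s\n' "${bdfs[@]}"           # 0000:00:10.0 through 0000:00:13.0
    }

    # One bounded doorbell_aers run per controller, matching the four
    # 10-second windows traced below:
    for bdf in $(get_nvme_bdfs); do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done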
00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:46.482 09:25:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:46.744 [2024-11-29 09:25:14.288890] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:07:56.735 Executing: test_write_invalid_db 00:07:56.735 Waiting for AER completion... 00:07:56.735 Failure: test_write_invalid_db 00:07:56.735 00:07:56.735 Executing: test_invalid_db_write_overflow_sq 00:07:56.735 Waiting for AER completion... 00:07:56.736 Failure: test_invalid_db_write_overflow_sq 00:07:56.736 00:07:56.736 Executing: test_invalid_db_write_overflow_cq 00:07:56.736 Waiting for AER completion... 00:07:56.736 Failure: test_invalid_db_write_overflow_cq 00:07:56.736 00:07:56.736 09:25:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:56.736 09:25:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:56.736 [2024-11-29 09:25:24.306212] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:06.706 Executing: test_write_invalid_db 00:08:06.706 Waiting for AER completion... 00:08:06.707 Failure: test_write_invalid_db 00:08:06.707 00:08:06.707 Executing: test_invalid_db_write_overflow_sq 00:08:06.707 Waiting for AER completion... 00:08:06.707 Failure: test_invalid_db_write_overflow_sq 00:08:06.707 00:08:06.707 Executing: test_invalid_db_write_overflow_cq 00:08:06.707 Waiting for AER completion... 00:08:06.707 Failure: test_invalid_db_write_overflow_cq 00:08:06.707 00:08:06.707 09:25:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:06.707 09:25:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:06.707 [2024-11-29 09:25:34.349367] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:16.714 Executing: test_write_invalid_db 00:08:16.714 Waiting for AER completion... 00:08:16.714 Failure: test_write_invalid_db 00:08:16.714 00:08:16.714 Executing: test_invalid_db_write_overflow_sq 00:08:16.714 Waiting for AER completion... 00:08:16.714 Failure: test_invalid_db_write_overflow_sq 00:08:16.714 00:08:16.715 Executing: test_invalid_db_write_overflow_cq 00:08:16.715 Waiting for AER completion... 
00:08:16.715 Failure: test_invalid_db_write_overflow_cq 00:08:16.715 00:08:16.715 09:25:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.715 09:25:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:16.715 [2024-11-29 09:25:44.378852] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 Executing: test_write_invalid_db 00:08:26.749 Waiting for AER completion... 00:08:26.749 Failure: test_write_invalid_db 00:08:26.749 00:08:26.749 Executing: test_invalid_db_write_overflow_sq 00:08:26.749 Waiting for AER completion... 00:08:26.749 Failure: test_invalid_db_write_overflow_sq 00:08:26.749 00:08:26.749 Executing: test_invalid_db_write_overflow_cq 00:08:26.749 Waiting for AER completion... 00:08:26.749 Failure: test_invalid_db_write_overflow_cq 00:08:26.749 00:08:26.749 ************************************ 00:08:26.749 END TEST nvme_doorbell_aers 00:08:26.749 ************************************ 00:08:26.749 00:08:26.749 real 0m40.200s 00:08:26.749 user 0m34.042s 00:08:26.749 sys 0m5.767s 00:08:26.749 09:25:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.749 09:25:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:26.749 09:25:54 nvme -- nvme/nvme.sh@97 -- # uname 00:08:26.749 09:25:54 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:26.749 09:25:54 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:26.749 09:25:54 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:26.749 09:25:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.749 09:25:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.749 ************************************ 00:08:26.749 START TEST nvme_multi_aen 00:08:26.749 ************************************ 00:08:26.749 09:25:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:26.749 [2024-11-29 09:25:54.417261] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.417880] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.418051] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.419412] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.419569] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.419779] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.420829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. 
Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.420952] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.420995] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.421945] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.422069] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 [2024-11-29 09:25:54.422135] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76601) is not found. Dropping the request. 00:08:26.749 Child process pid: 77127 00:08:27.010 [Child] Asynchronous Event Request test 00:08:27.010 [Child] Attached to 0000:00:10.0 00:08:27.010 [Child] Attached to 0000:00:11.0 00:08:27.010 [Child] Attached to 0000:00:13.0 00:08:27.010 [Child] Attached to 0000:00:12.0 00:08:27.010 [Child] Registering asynchronous event callbacks... 00:08:27.010 [Child] Getting orig temperature thresholds of all controllers 00:08:27.010 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:27.010 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 [Child] Cleaning up... 00:08:27.010 Asynchronous Event Request test 00:08:27.010 Attached to 0000:00:10.0 00:08:27.010 Attached to 0000:00:11.0 00:08:27.010 Attached to 0000:00:13.0 00:08:27.010 Attached to 0000:00:12.0 00:08:27.010 Reset controller to setup AER completions for this process 00:08:27.010 Registering asynchronous event callbacks... 
00:08:27.010 Getting orig temperature thresholds of all controllers 00:08:27.010 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.010 Setting all controllers temperature threshold low to trigger AER 00:08:27.010 Waiting for all controllers temperature threshold to be set lower 00:08:27.010 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:27.010 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:27.010 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:27.010 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.010 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:27.010 Waiting for all controllers to trigger AER and reset threshold 00:08:27.010 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.010 Cleaning up... 00:08:27.010 00:08:27.010 real 0m0.414s 00:08:27.010 user 0m0.121s 00:08:27.010 sys 0m0.170s 00:08:27.010 09:25:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.010 09:25:54 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:27.010 ************************************ 00:08:27.010 END TEST nvme_multi_aen 00:08:27.010 ************************************ 00:08:27.010 09:25:54 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.010 09:25:54 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:27.010 09:25:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.010 09:25:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.010 ************************************ 00:08:27.010 START TEST nvme_startup 00:08:27.010 ************************************ 00:08:27.010 09:25:54 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.270 Initializing NVMe Controllers 00:08:27.270 Attached to 0000:00:10.0 00:08:27.270 Attached to 0000:00:11.0 00:08:27.270 Attached to 0000:00:13.0 00:08:27.270 Attached to 0000:00:12.0 00:08:27.270 Initialization complete. 00:08:27.270 Time used:139961.859 (us). 
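The startup step above reports Time used:139961.859 (us), roughly 0.14 s to bring up all four controllers. Its invocation is verbatim from the trace; reading -t 1000000 as a microsecond budget is an inference from the microsecond-denominated output, not documented behavior:

    run_test nvme_startup "$rootdir/test/nvme/startup/startup" -t 1000000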
00:08:27.270 ************************************ 00:08:27.270 END TEST nvme_startup 00:08:27.270 ************************************ 00:08:27.270 00:08:27.270 real 0m0.201s 00:08:27.270 user 0m0.061s 00:08:27.270 sys 0m0.098s 00:08:27.270 09:25:54 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.270 09:25:54 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:27.270 09:25:54 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:27.270 09:25:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.270 09:25:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.270 09:25:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.270 ************************************ 00:08:27.270 START TEST nvme_multi_secondary 00:08:27.270 ************************************ 00:08:27.270 09:25:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:27.270 09:25:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77178 00:08:27.270 09:25:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77179 00:08:27.270 09:25:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:27.270 09:25:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:27.270 09:25:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:30.562 Initializing NVMe Controllers 00:08:30.562 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.562 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.562 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.562 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.562 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:30.562 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:30.562 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:30.562 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:30.562 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:30.562 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:30.562 Initialization complete. Launching workers. 
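The nvme_multi_secondary round launched above starts three spdk_nvme_perf processes sharing instance id 0 on disjoint core masks (0x1, 0x2, 0x4), one 5-second run and two 3-second runs, then waits on the two captured pids (77178 and 77179 in this round). A hedged reconstruction of the launch pattern; the pairing of masks to durations and the backgrounding details are illustrative, the exact lines are in the trace:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &            # long-running instance
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid0=$!    # secondary
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!    # secondary
    wait "$pid0"    # wait 77178 in the trace
    wait "$pid1"    # wait 77179 in the trace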
00:08:30.562 ======================================================== 00:08:30.562 Latency(us) 00:08:30.562 Device Information : IOPS MiB/s Average min max 00:08:30.562 PCIE (0000:00:10.0) NSID 1 from core 1: 6103.56 23.84 2619.94 776.91 10420.37 00:08:30.562 PCIE (0000:00:11.0) NSID 1 from core 1: 6103.56 23.84 2621.34 804.38 11225.16 00:08:30.562 PCIE (0000:00:13.0) NSID 1 from core 1: 6101.89 23.84 2622.34 789.68 10504.62 00:08:30.562 PCIE (0000:00:12.0) NSID 1 from core 1: 6103.56 23.84 2622.21 773.89 10239.52 00:08:30.562 PCIE (0000:00:12.0) NSID 2 from core 1: 6103.56 23.84 2623.12 770.69 11080.77 00:08:30.562 PCIE (0000:00:12.0) NSID 3 from core 1: 6101.89 23.84 2624.28 795.47 10529.42 00:08:30.562 ======================================================== 00:08:30.562 Total : 36618.02 143.04 2622.21 770.69 11225.16 00:08:30.562 00:08:30.823 Initializing NVMe Controllers 00:08:30.823 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.823 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.823 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.823 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.824 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:30.824 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:30.824 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:30.824 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:30.824 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:30.824 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:30.824 Initialization complete. Launching workers. 00:08:30.824 ======================================================== 00:08:30.824 Latency(us) 00:08:30.824 Device Information : IOPS MiB/s Average min max 00:08:30.824 PCIE (0000:00:10.0) NSID 1 from core 2: 2503.20 9.78 6389.81 1578.96 20864.31 00:08:30.824 PCIE (0000:00:11.0) NSID 1 from core 2: 2503.20 9.78 6391.70 1508.92 24624.02 00:08:30.824 PCIE (0000:00:13.0) NSID 1 from core 2: 2503.20 9.78 6399.98 1379.07 20050.57 00:08:30.824 PCIE (0000:00:12.0) NSID 1 from core 2: 2503.20 9.78 6399.31 1548.68 18727.55 00:08:30.824 PCIE (0000:00:12.0) NSID 2 from core 2: 2503.20 9.78 6400.36 1239.85 22041.22 00:08:30.824 PCIE (0000:00:12.0) NSID 3 from core 2: 2503.20 9.78 6400.21 1375.23 21096.51 00:08:30.824 ======================================================== 00:08:30.824 Total : 15019.21 58.67 6396.89 1239.85 24624.02 00:08:30.824 00:08:30.824 09:25:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77178 00:08:32.741 Initializing NVMe Controllers 00:08:32.741 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.741 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.741 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.741 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.741 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:32.741 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:32.741 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:32.741 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:32.741 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:32.741 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:32.741 Initialization complete. Launching workers. 
00:08:32.741 ======================================================== 00:08:32.741 Latency(us) 00:08:32.741 Device Information : IOPS MiB/s Average min max 00:08:32.741 PCIE (0000:00:10.0) NSID 1 from core 0: 7647.51 29.87 2090.92 759.94 8649.55 00:08:32.741 PCIE (0000:00:11.0) NSID 1 from core 0: 7645.31 29.86 2092.47 760.26 8529.16 00:08:32.741 PCIE (0000:00:13.0) NSID 1 from core 0: 7644.51 29.86 2092.66 740.11 9235.89 00:08:32.741 PCIE (0000:00:12.0) NSID 1 from core 0: 7648.71 29.88 2091.48 749.80 9744.72 00:08:32.741 PCIE (0000:00:12.0) NSID 2 from core 0: 7647.31 29.87 2091.83 781.29 9553.77 00:08:32.741 PCIE (0000:00:12.0) NSID 3 from core 0: 7651.11 29.89 2090.75 646.73 8808.49 00:08:32.741 ======================================================== 00:08:32.741 Total : 45884.47 179.24 2091.68 646.73 9744.72 00:08:32.742 00:08:32.742 09:26:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77179 00:08:32.742 09:26:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77248 00:08:32.742 09:26:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77249 00:08:32.742 09:26:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:32.742 09:26:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:32.742 09:26:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:36.042 Initializing NVMe Controllers 00:08:36.042 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.042 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.042 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.042 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.042 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:36.042 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:36.042 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:36.042 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:36.042 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:36.042 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:36.042 Initialization complete. Launching workers. 
00:08:36.042 ======================================================== 00:08:36.042 Latency(us) 00:08:36.042 Device Information : IOPS MiB/s Average min max 00:08:36.042 PCIE (0000:00:10.0) NSID 1 from core 0: 4335.54 16.94 3688.77 835.51 9298.59 00:08:36.042 PCIE (0000:00:11.0) NSID 1 from core 0: 4335.54 16.94 3690.25 867.17 9702.63 00:08:36.042 PCIE (0000:00:13.0) NSID 1 from core 0: 4335.54 16.94 3690.17 858.70 10635.95 00:08:36.042 PCIE (0000:00:12.0) NSID 1 from core 0: 4335.54 16.94 3690.12 851.96 9481.95 00:08:36.042 PCIE (0000:00:12.0) NSID 2 from core 0: 4335.54 16.94 3691.63 870.36 10394.68 00:08:36.042 PCIE (0000:00:12.0) NSID 3 from core 0: 4335.54 16.94 3691.53 869.01 10367.79 00:08:36.042 ======================================================== 00:08:36.042 Total : 26013.22 101.61 3690.41 835.51 10635.95 00:08:36.042 00:08:36.042 Initializing NVMe Controllers 00:08:36.042 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.042 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.042 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.042 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.042 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:36.042 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:36.042 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:36.043 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:36.043 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:36.043 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:36.043 Initialization complete. Launching workers. 00:08:36.043 ======================================================== 00:08:36.043 Latency(us) 00:08:36.043 Device Information : IOPS MiB/s Average min max 00:08:36.043 PCIE (0000:00:10.0) NSID 1 from core 1: 4114.25 16.07 3887.17 1173.40 9713.42 00:08:36.043 PCIE (0000:00:11.0) NSID 1 from core 1: 4114.25 16.07 3888.30 1081.47 11151.90 00:08:36.043 PCIE (0000:00:13.0) NSID 1 from core 1: 4114.25 16.07 3888.14 1087.81 10392.09 00:08:36.043 PCIE (0000:00:12.0) NSID 1 from core 1: 4114.25 16.07 3887.98 1070.71 10347.85 00:08:36.043 PCIE (0000:00:12.0) NSID 2 from core 1: 4114.25 16.07 3887.82 1014.86 10131.94 00:08:36.043 PCIE (0000:00:12.0) NSID 3 from core 1: 4114.25 16.07 3887.65 517.49 11067.33 00:08:36.043 ======================================================== 00:08:36.043 Total : 24685.49 96.43 3887.84 517.49 11151.90 00:08:36.043 00:08:37.961 Initializing NVMe Controllers 00:08:37.961 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.961 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.961 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.961 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.961 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:37.961 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:37.961 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:37.961 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:37.961 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:37.961 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:37.961 Initialization complete. Launching workers. 
00:08:37.961 ======================================================== 00:08:37.961 Latency(us) 00:08:37.961 Device Information : IOPS MiB/s Average min max 00:08:37.961 PCIE (0000:00:10.0) NSID 1 from core 2: 1998.07 7.80 8006.32 1638.57 27362.83 00:08:37.961 PCIE (0000:00:11.0) NSID 1 from core 2: 1998.07 7.80 8007.09 1695.27 20535.80 00:08:37.961 PCIE (0000:00:13.0) NSID 1 from core 2: 1998.07 7.80 8006.95 1298.93 21768.20 00:08:37.961 PCIE (0000:00:12.0) NSID 1 from core 2: 1998.07 7.80 8007.16 1692.89 20754.84 00:08:37.961 PCIE (0000:00:12.0) NSID 2 from core 2: 1998.07 7.80 8006.98 1500.58 21923.94 00:08:37.961 PCIE (0000:00:12.0) NSID 3 from core 2: 1998.07 7.80 8006.77 1158.31 21989.11 00:08:37.961 ======================================================== 00:08:37.961 Total : 11988.40 46.83 8006.88 1158.31 27362.83 00:08:37.961 00:08:37.961 ************************************ 00:08:37.961 END TEST nvme_multi_secondary 00:08:37.961 ************************************ 00:08:37.961 09:26:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77248 00:08:37.961 09:26:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77249 00:08:37.961 00:08:37.961 real 0m10.689s 00:08:37.961 user 0m18.345s 00:08:37.961 sys 0m0.676s 00:08:37.961 09:26:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.961 09:26:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:38.223 09:26:05 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:38.223 09:26:05 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76193 ]] 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@1094 -- # kill 76193 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@1095 -- # wait 76193 00:08:38.223 [2024-11-29 09:26:05.690553] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.690639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.690658] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.690669] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.691668] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.691716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.691732] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.691743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.692829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 
00:08:38.223 [2024-11-29 09:26:05.692884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.692903] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.692916] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.694185] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.694242] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.694259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 [2024-11-29 09:26:05.694272] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77125) is not found. Dropping the request. 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:38.223 09:26:05 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.223 09:26:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.223 ************************************ 00:08:38.223 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:38.223 ************************************ 00:08:38.223 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:38.223 * Looking for test storage... 
00:08:38.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:38.223 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.223 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.224 --rc genhtml_branch_coverage=1 00:08:38.224 --rc genhtml_function_coverage=1 00:08:38.224 --rc genhtml_legend=1 00:08:38.224 --rc geninfo_all_blocks=1 00:08:38.224 --rc geninfo_unexecuted_blocks=1 00:08:38.224 00:08:38.224 ' 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.224 --rc genhtml_branch_coverage=1 00:08:38.224 --rc genhtml_function_coverage=1 00:08:38.224 --rc genhtml_legend=1 00:08:38.224 --rc geninfo_all_blocks=1 00:08:38.224 --rc geninfo_unexecuted_blocks=1 00:08:38.224 00:08:38.224 ' 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.224 --rc genhtml_branch_coverage=1 00:08:38.224 --rc genhtml_function_coverage=1 00:08:38.224 --rc genhtml_legend=1 00:08:38.224 --rc geninfo_all_blocks=1 00:08:38.224 --rc geninfo_unexecuted_blocks=1 00:08:38.224 00:08:38.224 ' 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.224 --rc genhtml_branch_coverage=1 00:08:38.224 --rc genhtml_function_coverage=1 00:08:38.224 --rc genhtml_legend=1 00:08:38.224 --rc geninfo_all_blocks=1 00:08:38.224 --rc geninfo_unexecuted_blocks=1 00:08:38.224 00:08:38.224 ' 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:38.224 
09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:38.224 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:38.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77408 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77408 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77408 ']' 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
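[editor's note] The get_first_nvme_bdf sequence above resolves NVMe PCI addresses by asking scripts/gen_nvme.sh for a JSON bdev config and pulling each .params.traddr out with jq. A minimal standalone sketch of that pattern, assuming an SPDK checkout at $rootdir and jq on PATH; this is an illustration, not the verbatim autotest_common.sh body:

    # Discover NVMe PCI addresses (BDFs) the way the helper above does.
    get_nvme_bdfs() {
        local -a bdfs
        # gen_nvme.sh emits one bdev_nvme_attach_controller entry per device,
        # each carrying its PCI address in .params.traddr (e.g. 0000:00:10.0).
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1
        printf '%s\n' "${bdfs[@]}"
    }
    get_first_nvme_bdf() { get_nvme_bdfs | head -n1; }   # yields 0000:00:10.0 in this run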
00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.485 09:26:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.485 [2024-11-29 09:26:06.097324] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:38.485 [2024-11-29 09:26:06.097793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77408 ] 00:08:38.747 [2024-11-29 09:26:06.272115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:38.747 [2024-11-29 09:26:06.302110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.747 [2024-11-29 09:26:06.349058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.747 [2024-11-29 09:26:06.349404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.747 [2024-11-29 09:26:06.349782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.747 [2024-11-29 09:26:06.349848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.318 09:26:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.318 09:26:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:39.318 09:26:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:39.318 09:26:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.318 09:26:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 nvme0n1 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_pbYD6.txt 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 true 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732872367 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77431 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 
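[editor's note] Condensed, the admin-command error-injection flow this test drives looks roughly as follows. Every RPC name and flag is copied from the surrounding xtrace (rpc.py shortened from its full /home/vagrant/spdk_repo path); the step comments are an editorial summary, not script source, and <base64 nvme_cmd> stands in for the long payload logged below:

    # 1. Attach the target controller to the running spdk_tgt.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # 2. Arm a one-shot error for the next GET FEATURES admin command (opc 0x0a):
    #    hold it for up to 15 s, complete it with SCT=0/SC=1, and do not submit it.
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # 3. Send a GET FEATURES (cdw10=7, Number of Queues) that will get stuck...
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 nvme_cmd> &
    # 4. ...and verify that a controller reset completes it manually in time.
    rpc.py bdev_nvme_reset_controller nvme0
    wait $!   # the stuck send_cmd RPC returns once the reset aborts the command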
00:08:39.580 09:26:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.596 [2024-11-29 09:26:09.081524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:41.596 [2024-11-29 09:26:09.081806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:41.596 [2024-11-29 09:26:09.081829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:41.596 [2024-11-29 09:26:09.081854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.596 [2024-11-29 09:26:09.085146] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77431 00:08:41.596 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77431 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77431 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_pbYD6.txt 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 
00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_pbYD6.txt 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77408 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77408 ']' 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77408 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77408 00:08:41.596 killing process with pid 77408 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77408' 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77408 00:08:41.596 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77408 00:08:41.857 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:41.857 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:41.857 
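[editor's note] The pass/fail comparison above hinges on base64_decode_bits: the saved .cpl field is the raw 16-byte completion queue entry, which the helper dumps to bytes and masks for status sub-fields. A reconstruction consistent with the logged calls (shift 1 / mask 255 for the status code, shift 9 / mask 3 for the status code type); the exact helper body in the script may differ:

    base64_decode_bits() {   # $1 = base64 cpl, $2 = bit shift, $3 = mask
        local -a bin_array
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # Bytes 14-15 of an NVMe CQE hold phase + status: bit 0 is the phase
        # tag, bits 1-8 the status code (SC), bits 9-11 the status code type.
        local status=$((bin_array[14] | bin_array[15] << 8))
        printf '0x%x' $(((status >> $2) & $3))
    }
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1 (SC, as injected)
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0 (SCT, as injected)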
00:08:41.857 real 0m3.736s 00:08:41.857 user 0m12.896s 00:08:41.857 sys 0m0.710s 00:08:41.857 ************************************ 00:08:41.857 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:41.857 ************************************ 00:08:41.857 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.857 09:26:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.857 09:26:09 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:41.857 09:26:09 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:42.117 09:26:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.117 09:26:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.117 09:26:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.117 ************************************ 00:08:42.117 START TEST nvme_fio 00:08:42.117 ************************************ 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:42.117 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:42.117 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:42.117 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:42.117 09:26:09 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:42.117 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:42.117 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:42.117 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:42.117 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:42.118 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:42.378 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:42.379 09:26:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:42.640 09:26:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:42.640 09:26:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:42.640 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:42.640 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:42.640 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:08:42.640 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:42.640 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:42.641 09:26:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:42.902 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:42.902 fio-3.35 00:08:42.902 Starting 1 thread 00:08:49.455 00:08:49.455 test: (groupid=0, jobs=1): err= 0: pid=77560: Fri Nov 29 09:26:16 2024 00:08:49.455 read: IOPS=22.0k, BW=86.0MiB/s (90.2MB/s)(172MiB/2001msec) 00:08:49.455 slat (usec): min=4, max=102, avg= 5.20, stdev= 2.45 00:08:49.455 clat (usec): min=779, max=10027, avg=2905.74, stdev=861.06 00:08:49.455 lat (usec): min=799, max=10079, avg=2910.94, stdev=862.43 00:08:49.455 clat percentiles (usec): 00:08:49.455 | 1.00th=[ 2040], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:49.455 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:08:49.455 | 70.00th=[ 2769], 80.00th=[ 3032], 90.00th=[ 3982], 95.00th=[ 5014], 00:08:49.455 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7242], 99.95th=[ 7898], 00:08:49.455 | 99.99th=[ 9896] 00:08:49.455 bw ( KiB/s): min=82802, max=88280, per=96.95%, avg=85408.67, stdev=2748.57, samples=3 00:08:49.455 iops : min=20700, max=22070, avg=21352.00, stdev=687.38, samples=3 00:08:49.455 write: IOPS=21.9k, BW=85.5MiB/s (89.6MB/s)(171MiB/2001msec); 0 zone resets 00:08:49.455 slat (usec): min=4, max=101, avg= 5.48, stdev= 2.52 00:08:49.455 clat (usec): min=682, max=9954, avg=2906.02, stdev=860.08 00:08:49.455 lat (usec): min=702, max=9969, avg=2911.50, stdev=861.45 00:08:49.455 clat percentiles (usec): 00:08:49.455 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:49.455 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:08:49.455 | 70.00th=[ 2737], 80.00th=[ 2999], 90.00th=[ 3949], 95.00th=[ 5014], 00:08:49.455 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7373], 99.95th=[ 8029], 00:08:49.455 | 99.99th=[ 9765] 00:08:49.455 bw ( KiB/s): min=82810, max=88800, per=97.75%, avg=85560.67, stdev=3024.75, samples=3 00:08:49.455 iops : min=20702, max=22200, avg=21390.00, stdev=756.42, samples=3 00:08:49.455 lat (usec) : 
750=0.01%, 1000=0.01% 00:08:49.455 lat (msec) : 2=0.91%, 4=89.29%, 10=9.80%, 20=0.01% 00:08:49.455 cpu : usr=99.15%, sys=0.05%, ctx=6, majf=0, minf=623 00:08:49.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:49.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.455 issued rwts: total=44071,43786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.455 00:08:49.455 Run status group 0 (all jobs): 00:08:49.455 READ: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=172MiB (181MB), run=2001-2001msec 00:08:49.456 WRITE: bw=85.5MiB/s (89.6MB/s), 85.5MiB/s-85.5MiB/s (89.6MB/s-89.6MB/s), io=171MiB (179MB), run=2001-2001msec 00:08:49.456 ----------------------------------------------------- 00:08:49.456 Suppressions used: 00:08:49.456 count bytes template 00:08:49.456 1 32 /usr/src/fio/parse.c 00:08:49.456 1 8 libtcmalloc_minimal.so 00:08:49.456 ----------------------------------------------------- 00:08:49.456 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:49.456 09:26:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:49.456 09:26:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:49.456 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:49.456 fio-3.35 00:08:49.456 Starting 1 thread 00:08:54.720 00:08:54.720 test: (groupid=0, jobs=1): err= 0: pid=77617: Fri Nov 29 09:26:21 2024 00:08:54.720 read: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(160MiB/2004msec) 00:08:54.720 slat (nsec): min=3409, max=53641, avg=4978.31, stdev=2074.49 00:08:54.720 clat (usec): min=598, max=7883, avg=2224.09, stdev=891.83 00:08:54.720 lat (usec): min=602, max=7888, avg=2229.07, stdev=892.78 00:08:54.720 clat percentiles (usec): 00:08:54.720 | 1.00th=[ 1090], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1450], 00:08:54.720 | 30.00th=[ 1647], 40.00th=[ 1942], 50.00th=[ 2245], 60.00th=[ 2409], 00:08:54.720 | 70.00th=[ 2474], 80.00th=[ 2606], 90.00th=[ 2999], 95.00th=[ 3687], 00:08:54.720 | 99.00th=[ 6128], 99.50th=[ 6587], 99.90th=[ 7635], 99.95th=[ 7701], 00:08:54.720 | 99.99th=[ 7832] 00:08:54.720 bw ( KiB/s): min=73736, max=92992, per=100.00%, avg=81862.00, stdev=8622.80, samples=4 00:08:54.720 iops : min=18434, max=23248, avg=20465.50, stdev=2155.70, samples=4 00:08:54.720 write: IOPS=20.4k, BW=79.7MiB/s (83.5MB/s)(160MiB/2004msec); 0 zone resets 00:08:54.720 slat (nsec): min=3538, max=57017, avg=5266.93, stdev=2020.11 00:08:54.720 clat (usec): min=750, max=20845, avg=4032.69, stdev=3237.24 00:08:54.720 lat (usec): min=755, max=20849, avg=4037.96, stdev=3237.42 00:08:54.720 clat percentiles (usec): 00:08:54.720 | 1.00th=[ 1172], 5.00th=[ 1336], 10.00th=[ 1500], 20.00th=[ 1893], 00:08:54.720 | 30.00th=[ 2311], 40.00th=[ 2442], 50.00th=[ 2540], 60.00th=[ 2737], 00:08:54.720 | 70.00th=[ 3687], 80.00th=[ 6390], 90.00th=[ 9372], 95.00th=[11076], 00:08:54.720 | 99.00th=[14615], 99.50th=[15664], 99.90th=[18220], 99.95th=[19530], 00:08:54.720 | 99.99th=[20317] 00:08:54.720 bw ( KiB/s): min=74440, max=92680, per=99.97%, avg=81538.00, stdev=8440.97, samples=4 00:08:54.720 iops : min=18610, max=23170, avg=20384.50, stdev=2110.24, samples=4 00:08:54.720 lat (usec) : 750=0.02%, 1000=0.18% 00:08:54.720 lat (msec) : 2=31.52%, 4=52.13%, 10=12.38%, 20=3.76%, 50=0.01% 00:08:54.720 cpu : usr=99.25%, sys=0.05%, ctx=14, majf=0, minf=623 00:08:54.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:54.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.720 issued rwts: total=40964,40863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.720 00:08:54.720 Run status group 0 (all jobs): 00:08:54.720 READ: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=160MiB (168MB), run=2004-2004msec 00:08:54.720 WRITE: bw=79.7MiB/s (83.5MB/s), 79.7MiB/s-79.7MiB/s (83.5MB/s-83.5MB/s), io=160MiB (167MB), run=2004-2004msec 00:08:54.720 ----------------------------------------------------- 00:08:54.720 Suppressions used: 00:08:54.720 count bytes template 00:08:54.720 1 32 
/usr/src/fio/parse.c 00:08:54.720 1 8 libtcmalloc_minimal.so 00:08:54.720 ----------------------------------------------------- 00:08:54.720 00:08:54.720 09:26:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:54.720 09:26:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:54.720 09:26:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:54.720 09:26:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:54.720 09:26:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:54.720 09:26:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:54.720 09:26:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.720 09:26:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.720 09:26:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.720 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:54.720 fio-3.35 00:08:54.721 Starting 1 thread 00:09:01.295 00:09:01.295 test: (groupid=0, jobs=1): err= 0: pid=77677: Fri Nov 29 09:26:28 2024 00:09:01.295 read: IOPS=21.9k, BW=85.6MiB/s (89.8MB/s)(171MiB/2001msec) 00:09:01.295 slat (nsec): min=4205, max=92144, avg=5180.75, stdev=2328.14 00:09:01.295 clat (usec): min=209, max=9698, avg=2921.37, stdev=846.22 00:09:01.295 lat (usec): min=213, max=9791, avg=2926.55, stdev=847.57 
00:09:01.295 clat percentiles (usec): 00:09:01.295 | 1.00th=[ 1958], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:09:01.295 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2671], 00:09:01.295 | 70.00th=[ 2769], 80.00th=[ 2999], 90.00th=[ 3949], 95.00th=[ 5014], 00:09:01.295 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7439], 99.95th=[ 7898], 00:09:01.295 | 99.99th=[ 9503] 00:09:01.295 bw ( KiB/s): min=85320, max=91024, per=100.00%, avg=87717.33, stdev=2958.73, samples=3 00:09:01.295 iops : min=21330, max=22756, avg=21929.33, stdev=739.68, samples=3 00:09:01.295 write: IOPS=21.8k, BW=85.1MiB/s (89.2MB/s)(170MiB/2001msec); 0 zone resets 00:09:01.295 slat (usec): min=4, max=138, avg= 5.45, stdev= 2.45 00:09:01.295 clat (usec): min=200, max=9563, avg=2917.27, stdev=840.97 00:09:01.295 lat (usec): min=205, max=9572, avg=2922.73, stdev=842.32 00:09:01.295 clat percentiles (usec): 00:09:01.295 | 1.00th=[ 1942], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:09:01.295 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:09:01.295 | 70.00th=[ 2737], 80.00th=[ 2999], 90.00th=[ 3916], 95.00th=[ 5014], 00:09:01.295 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7570], 99.95th=[ 7963], 00:09:01.295 | 99.99th=[ 9372] 00:09:01.295 bw ( KiB/s): min=85072, max=91784, per=100.00%, avg=87877.33, stdev=3488.90, samples=3 00:09:01.295 iops : min=21268, max=22946, avg=21969.33, stdev=872.23, samples=3 00:09:01.295 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:01.295 lat (msec) : 2=1.12%, 4=89.22%, 10=9.61% 00:09:01.295 cpu : usr=99.30%, sys=0.00%, ctx=4, majf=0, minf=625 00:09:01.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:01.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.295 issued rwts: total=43866,43576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.295 00:09:01.295 Run status group 0 (all jobs): 00:09:01.295 READ: bw=85.6MiB/s (89.8MB/s), 85.6MiB/s-85.6MiB/s (89.8MB/s-89.8MB/s), io=171MiB (180MB), run=2001-2001msec 00:09:01.296 WRITE: bw=85.1MiB/s (89.2MB/s), 85.1MiB/s-85.1MiB/s (89.2MB/s-89.2MB/s), io=170MiB (178MB), run=2001-2001msec 00:09:01.296 ----------------------------------------------------- 00:09:01.296 Suppressions used: 00:09:01.296 count bytes template 00:09:01.296 1 32 /usr/src/fio/parse.c 00:09:01.296 1 8 libtcmalloc_minimal.so 00:09:01.296 ----------------------------------------------------- 00:09:01.296 00:09:01.296 09:26:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:01.296 09:26:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:01.296 09:26:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:01.296 09:26:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:01.296 09:26:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:01.296 09:26:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:01.553 09:26:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:01.553 09:26:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 
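[editor's note] Worth annotating once: traddr=0000.00.13.0 in these invocations is not a mangled PCI address. fio splits --filename values on ':', so the SPDK fio plugin convention is to write the BDF with dots instead of colons. The command the harness assembles for each controller, exactly as logged (ASan preloaded ahead of the plugin so the sanitizer runtime initializes first):

    # ':' in the BDF becomes '.' because fio reserves ':' as a filename separator.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096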
00:09:01.553 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:01.554 09:26:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:01.812 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:01.812 fio-3.35 00:09:01.812 Starting 1 thread 00:09:07.074 00:09:07.074 test: (groupid=0, jobs=1): err= 0: pid=77738: Fri Nov 29 09:26:34 2024 00:09:07.074 read: IOPS=23.5k, BW=91.9MiB/s (96.3MB/s)(184MiB/2001msec) 00:09:07.074 slat (nsec): min=4230, max=74904, avg=5040.88, stdev=2239.25 00:09:07.074 clat (usec): min=273, max=12617, avg=2722.42, stdev=814.93 00:09:07.074 lat (usec): min=278, max=12670, avg=2727.46, stdev=816.36 00:09:07.074 clat percentiles (usec): 00:09:07.074 | 1.00th=[ 1876], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:07.074 | 30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:09:07.074 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 3294], 95.00th=[ 4752], 00:09:07.074 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7963], 99.95th=[ 9372], 00:09:07.074 | 99.99th=[12387] 00:09:07.074 bw ( KiB/s): min=88592, max=93792, per=97.76%, avg=91946.67, stdev=2910.08, samples=3 00:09:07.074 iops : min=22148, max=23448, avg=22986.67, stdev=727.52, samples=3 00:09:07.074 write: IOPS=23.4k, BW=91.2MiB/s (95.7MB/s)(183MiB/2001msec); 0 zone resets 00:09:07.074 slat (usec): min=4, max=104, avg= 5.31, stdev= 2.21 00:09:07.074 clat (usec): min=217, max=12439, avg=2720.76, stdev=825.44 00:09:07.074 lat (usec): min=222, max=12452, avg=2726.08, stdev=826.84 00:09:07.074 clat percentiles (usec): 00:09:07.074 | 1.00th=[ 1876], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:07.074 | 30.00th=[ 2442], 40.00th=[ 2442], 
50.00th=[ 2474], 60.00th=[ 2507], 00:09:07.074 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 3261], 95.00th=[ 4817], 00:09:07.074 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 8094], 99.95th=[ 9896], 00:09:07.074 | 99.99th=[12125] 00:09:07.074 bw ( KiB/s): min=88016, max=95208, per=98.53%, avg=92050.67, stdev=3675.39, samples=3 00:09:07.074 iops : min=22004, max=23802, avg=23012.67, stdev=918.85, samples=3 00:09:07.074 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:07.074 lat (msec) : 2=1.43%, 4=91.61%, 10=6.85%, 20=0.05% 00:09:07.074 cpu : usr=99.25%, sys=0.05%, ctx=5, majf=0, minf=624 00:09:07.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:07.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.074 issued rwts: total=47052,46735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.074 00:09:07.074 Run status group 0 (all jobs): 00:09:07.074 READ: bw=91.9MiB/s (96.3MB/s), 91.9MiB/s-91.9MiB/s (96.3MB/s-96.3MB/s), io=184MiB (193MB), run=2001-2001msec 00:09:07.074 WRITE: bw=91.2MiB/s (95.7MB/s), 91.2MiB/s-91.2MiB/s (95.7MB/s-95.7MB/s), io=183MiB (191MB), run=2001-2001msec 00:09:07.332 ----------------------------------------------------- 00:09:07.332 Suppressions used: 00:09:07.332 count bytes template 00:09:07.332 1 32 /usr/src/fio/parse.c 00:09:07.332 1 8 libtcmalloc_minimal.so 00:09:07.332 ----------------------------------------------------- 00:09:07.332 00:09:07.332 09:26:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:07.332 ************************************ 00:09:07.332 END TEST nvme_fio 00:09:07.332 ************************************ 00:09:07.332 09:26:34 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:07.332 00:09:07.332 real 0m25.286s 00:09:07.332 user 0m17.415s 00:09:07.332 sys 0m12.972s 00:09:07.332 09:26:34 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.332 09:26:34 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 ************************************ 00:09:07.332 END TEST nvme 00:09:07.332 ************************************ 00:09:07.332 00:09:07.332 real 1m34.367s 00:09:07.332 user 3m35.078s 00:09:07.332 sys 0m24.005s 00:09:07.332 09:26:34 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.332 09:26:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 09:26:34 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:07.332 09:26:34 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:07.332 09:26:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.332 09:26:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.332 09:26:34 -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 ************************************ 00:09:07.332 START TEST nvme_scc 00:09:07.332 ************************************ 00:09:07.332 09:26:34 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:07.332 * Looking for test storage... 
00:09:07.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:07.332 09:26:35 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.332 09:26:35 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.332 09:26:35 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.590 09:26:35 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.590 09:26:35 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:07.590 09:26:35 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.590 09:26:35 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.590 --rc genhtml_branch_coverage=1 00:09:07.590 --rc genhtml_function_coverage=1 00:09:07.590 --rc genhtml_legend=1 00:09:07.590 --rc geninfo_all_blocks=1 00:09:07.590 --rc geninfo_unexecuted_blocks=1 00:09:07.590 00:09:07.590 ' 00:09:07.590 09:26:35 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.590 --rc genhtml_branch_coverage=1 00:09:07.590 --rc genhtml_function_coverage=1 00:09:07.590 --rc genhtml_legend=1 00:09:07.590 --rc geninfo_all_blocks=1 00:09:07.590 --rc geninfo_unexecuted_blocks=1 00:09:07.590 00:09:07.590 ' 00:09:07.590 09:26:35 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:07.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.590 --rc genhtml_branch_coverage=1 00:09:07.590 --rc genhtml_function_coverage=1 00:09:07.590 --rc genhtml_legend=1 00:09:07.590 --rc geninfo_all_blocks=1 00:09:07.590 --rc geninfo_unexecuted_blocks=1 00:09:07.590 00:09:07.590 ' 00:09:07.590 09:26:35 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.591 --rc genhtml_branch_coverage=1 00:09:07.591 --rc genhtml_function_coverage=1 00:09:07.591 --rc genhtml_legend=1 00:09:07.591 --rc geninfo_all_blocks=1 00:09:07.591 --rc geninfo_unexecuted_blocks=1 00:09:07.591 00:09:07.591 ' 00:09:07.591 09:26:35 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.591 09:26:35 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.591 09:26:35 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.591 09:26:35 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.591 09:26:35 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.591 09:26:35 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.591 09:26:35 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.591 09:26:35 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.591 09:26:35 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:07.591 09:26:35 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
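[editor's note] The wall of nvme/functions.sh xtrace that follows is scan_nvme_ctrls populating one associative array per controller from nvme-cli id-ctrl output. Stripped of the eval plumbing, the per-controller loop reduces to roughly the sketch below, assuming nvme-cli's "field : value" text format; the real helper also shifts into per-namespace arrays, which this omits:

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue      # skip banner and blank lines
        reg=${reg//[[:space:]]/}                  # "vid       " -> "vid"
        nvme0[$reg]=${val# }                      # keep value text, drop one lead space
    done < <(nvme id-ctrl /dev/nvme0)
    echo "${nvme0[vid]}"   # 0x1b36 for the QEMU controller in this log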
00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:07.591 09:26:35 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:07.591 09:26:35 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.591 09:26:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:07.591 09:26:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:07.591 09:26:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:07.591 09:26:35 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:07.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:07.848 Waiting for block devices as requested 00:09:07.848 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.106 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.106 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.106 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.418 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:13.418 09:26:40 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:13.418 09:26:40 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:13.418 09:26:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:13.418 09:26:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.418 09:26:40 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:13.418 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
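[note] A couple of the id-ctrl fields recorded just above decode into familiar numbers. The ver and mdts values are taken from this trace; the 4 KiB minimum page size is an assumption, since CAP.MPSMIN is not in the log:

    ver=$(( 0x10400 ))   # from the trace: nvme0[ver]=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0

    mdts=7               # from the trace: nvme0[mdts]=7, a power of two in
    mps=4096             # units of the minimum page size, assumed 4 KiB here
    echo "max transfer: $(( (1 << mdts) * mps / 1024 )) KiB"   # -> 512 KiB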
00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:13.419 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:13.420 09:26:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:13.420 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.421 09:26:40 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.421 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:13.422 
09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
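[note] The ng0n1 id-ns fields captured above pin down the namespace geometry: nsze=0x140000 logical blocks, with flbas=0x4 selecting LBA format 4, whose descriptor later in this trace reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. A quick check of the implied capacity, using only values from this trace:

    nsze=$(( 0x140000 ))   # namespace size in logical blocks (from the trace)
    lbads=12               # lbaf4: "lbads:12" -> 4096-byte blocks
    echo "capacity: $(( nsze * (1 << lbads) )) bytes"            # -> 5368709120
    echo "          $(( nsze * (1 << lbads) / 1024**3 )) GiB"    # -> 5 GiB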
00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:13.422 09:26:40 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:13.422 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:13.423 09:26:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.423 09:26:40 
nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:13.423 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:13.424 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:13.425 09:26:40 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:13.425 09:26:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:13.425 09:26:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:13.425 09:26:40 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
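The trace above is the nvme_get helper at work: it runs nvme-cli's id-ctrl/id-ns against a device, splits each "reg : val" output line on ':' (IFS=:), and stores every pair in a per-device bash associative array. Below is a minimal standalone sketch of that pattern, assuming nvme-cli's usual "name : value" text output; the function and array names (read_id_ctrl, idc) are illustrative, not the real nvme/functions.sh helper, which evals into a named global array and preserves the field padding this version trims.

  #!/usr/bin/env bash
  # Map one controller's `nvme id-ctrl` output into an associative array.
  declare -A idc

  read_id_ctrl() {
    local dev=$1 reg val
    while IFS=: read -r reg val; do
      reg=$(xargs <<<"$reg") val=$(xargs <<<"$val")   # trim nvme-cli's padding
      [[ -n $reg && -n $val ]] || continue            # skip blank/odd lines
      idc[$reg]=$val
    done < <(nvme id-ctrl "$dev")
  }

  read_id_ctrl /dev/nvme1
  echo "mn=${idc[mn]} sn=${idc[sn]} nn=${idc[nn]}"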
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:13.425 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:09:13.426 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:09:13.427 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
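To read the id-ns fields that follow: the low nibble of flbas selects the active LBA format, each lbafN entry gives lbads (log2 of the data block size) and ms (per-block metadata bytes), and nsze is the namespace size in logical blocks. For ng1n1 below, flbas=0x7 points at lbaf7 (lbads:12, i.e. 4096-byte blocks, marked "in use"), so nsze=0x17a17a (1,548,666 blocks) is roughly 6.3 GB (about 5.9 GiB) of data, metadata excluded. A quick shell check of that arithmetic, using the values copied from the trace:

  # flbas=0x7 selects lbaf7; its lbads:12 means 4096-byte data blocks.
  nsze=0x17a17a lbads=12
  echo $(( nsze << lbads ))   # 6343335936 bytes of data (metadata not counted)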
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:09:13.428 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:13.429 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:13.430 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
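nvme1n1 reports flbas=0x7, so LBA format 7 is the one in use, and lbaf7 above decodes to 4096-byte data blocks (lbads:12, i.e. 2^12) with 64 bytes of metadata per block. A small sketch of pulling that apart from the array nvme_get just built; the helper name is invented for illustration:

    # Hypothetical helper: decode the in-use LBA format from an id-ns array.
    declare -A nvme1n1=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')  # values from the trace
    active_lbaf() {
        local -n ns=$1
        local idx=$(( ns[flbas] & 0xf ))             # flbas bits 3:0 select the format
        local lbaf=${ns[lbaf$idx]}
        local lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
        local ms=${lbaf#ms:}; ms=${ms%% *}
        echo "lbaf$idx: $(( 1 << lbads ))-byte blocks, $ms B metadata"
    }
    active_lbaf nvme1n1                              # -> lbaf7: 4096-byte blocks, 64 B metadata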
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:13.431 09:26:40 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:13.431 09:26:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:13.431 09:26:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:13.431 09:26:40 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:13.431 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
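wctemp=343 and cctemp=373 are the warning and critical composite-temperature thresholds, which id-ctrl reports in kelvin; converted, this QEMU controller warns at 70 °C and goes critical at 100 °C:

    # id-ctrl temperature thresholds are in kelvin; convert for readability.
    wctemp=343; cctemp=373
    echo "warning:  $(( wctemp - 273 )) C"    # -> 70 C
    echo "critical: $(( cctemp - 273 )) C"    # -> 100 C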
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:13.432 09:26:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:13.432 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:13.432 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:13.433 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
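sqes=0x66 and cqes=0x44 each pack two log2 entry sizes: bits 3:0 give the required size and bits 7:4 the maximum, so this controller uses fixed 64-byte submission and 16-byte completion queue entries. A quick decode:

    # SQES/CQES: low nibble = required size, high nibble = maximum (both log2).
    decode_qes() {
        local qes=$(( $1 ))
        echo "required $(( 1 << (qes & 0xf) )) B, maximum $(( 1 << (qes >> 4) )) B"
    }
    decode_qes 0x66    # SQ entries -> required 64 B, maximum 64 B
    decode_qes 0x44    # CQ entries -> required 16 B, maximum 16 B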
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:09:13.434 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:13.436 09:26:41 nvme_scc -- 
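The blocks above and below all follow the same pattern: nvme_get runs nvme-cli's id-ns against a device node, splits each output line on ':' with IFS=: and read -r reg val, and evals every register/value pair into a global associative array named after the node (ng2n1, ng2n2, and so on). A minimal standalone sketch of that pattern, assuming nvme-cli's usual "field : value" output; the helper name parse_id_ns is hypothetical and the whitespace trimming is simplified compared to functions.sh:

  #!/usr/bin/env bash
  # Sketch of the nvme_get pattern from the trace: parse `nvme id-ns` output
  # into a global associative array keyed by register name.
  parse_id_ns() {                       # usage: parse_id_ns <array-name> <device>
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"               # same trick functions.sh@20 uses
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}      # "lbaf  4 " -> "lbaf4"
          val=${val# }                  # drop the padding after the colon
          [[ -n $reg && -n $val ]] && eval "$ref[\$reg]=\$val"
      done < <(nvme id-ns "$dev")
  }
  parse_id_ns ng2n1 /dev/ng2n1          # afterwards: echo "${ng2n1[nsze]}"

Note that because val is the last variable given to read, it keeps any colons embedded in the remainder of the line, which is why the lbafN descriptors ("ms:0 lbads:12 rp:0 ...") survive intact in the arrays above.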
00:09:13.435 09:26:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] -> ns_dev=ng2n2
00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:13.436 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme_get ng2n2 id-ns fields:
00:09:13.436 09:26:41 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:13.436 09:26:41 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:13.436 09:26:41 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:13.436 09:26:41 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:13.436 09:26:41 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:13.437 09:26:41 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:13.437 09:26:41 nvme_scc --   lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
00:09:13.437 09:26:41 nvme_scc --   lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:09:13.437 09:26:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
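Worth decoding, since the same values repeat for every namespace here: flbas=0x4 selects LBA format 4, whose descriptor reads ms:0 lbads:12 rp:0 (in use), i.e. no separate metadata and 2^12 = 4096-byte data blocks, and nsze=0x100000 means 1,048,576 of those blocks, a 4 GiB namespace. A quick check of that arithmetic (per the NVMe spec, the low four bits of FLBAS index the format table when at most 16 formats are reported):

  # Values recorded above for ng2n2; lbads is log2 of the LBA data size.
  flbas=0x4 nsze=0x100000 lbads=12
  echo "active lbaf : $(( flbas & 0xf ))"               # -> 4
  echo "block size  : $(( 1 << lbads )) bytes"          # -> 4096
  echo "capacity    : $(( nsze * (1 << lbads) )) bytes" # -> 4294967296 (4 GiB)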
00:09:13.437 09:26:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] -> ns_dev=ng2n3
00:09:13.437 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:13.437 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme_get ng2n3 id-ns fields:
00:09:13.437 09:26:41 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:13.437 09:26:41 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:13.438 09:26:41 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:13.438 09:26:41 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:13.438 09:26:41 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:13.438 09:26:41 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:13.438 09:26:41 nvme_scc --   lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
00:09:13.439 09:26:41 nvme_scc --   lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:09:13.439 09:26:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
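The for-loop at functions.sh@54, visible between the per-namespace dumps, enumerates both flavors of namespace node with a single extglob pattern: for ctrl=/sys/class/nvme/nvme2, "ng${ctrl##*nvme}" expands to ng2 and "${ctrl##*/}n" to nvme2n, so the glob matches the generic character nodes (ng2n1..ng2n3) as well as the block nodes (nvme2n1, ...). A standalone sketch, assuming extglob support:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @(ng2|nvme2n)*
      echo "namespace node: ${ns##*/}"    # ng2n1 ng2n2 ng2n3 nvme2n1 ...
  done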
00:09:13.439 09:26:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] -> ns_dev=nvme2n1
00:09:13.439 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:13.439 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme_get nvme2n1 id-ns fields:
00:09:13.439 09:26:41 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:13.439 09:26:41 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:13.439 09:26:41 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:13.440 09:26:41 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:13.440 09:26:41 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:13.440 09:26:41 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:13.440 09:26:41 nvme_scc --   lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
lbads:9 rp:0 ' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:13.440 09:26:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.705 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:13.706 09:26:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:13.706 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:13.707 
09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:13.707 09:26:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:13.707 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.708 09:26:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:13.708 09:26:41 nvme_scc -- 
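
The trace above completes the id-ns pass for nvme2n1 through nvme2n3: each `reg`/`val` pair printed by nvme-cli is folded into a per-namespace bash associative array. Below is a minimal sketch of that parsing pattern, reconstructed from the functions.sh@17-23 trace lines; it is not the verbatim SPDK helper, and the whitespace trimming is an assumption based on the values seen in the log.

# nvme_get <array-name> <nvme-cli subcommand...>
# e.g. nvme_get nvme2n3 id-ns /dev/nvme2n3, as in the functions.sh@57 trace
nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"                     # declare the target array globally (functions.sh@20)
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue            # the repeated `[[ -n ... ]]` guards above
		reg=${reg//[[:space:]]/}             # "lbaf  0 " -> "lbaf0"
		val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace, keep trailing padding
		eval "${ref}[${reg}]=\"\$val\""      # e.g. nvme2n3[mssrl]=128
	done < <(nvme "$@")                     # this run invokes /usr/local/src/nvme-cli/nvme
}

Splitting with IFS=: and two read variables keeps any further colons inside val, which is why compound values like 'ms:0 lbads:9 rp:0 (in use)' survive intact in entries such as nvme2n3[lbaf4] above.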
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:13.708 09:26:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:13.708 09:26:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:13.708 09:26:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:13.708 09:26:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.708 09:26:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 
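
At this point the log registers nvme2 in the ctrls/nvmes/bdfs/ordered_ctrls maps and moves on to nvme3 behind a pci_can_use gate. A condensed sketch of that discovery loop follows, reusing the nvme_get sketch above; the BDF lookup via readlink and the one-line pci_can_use stub are assumptions, since the traced scripts/common.sh@18-27 version consults allow/block lists that this log leaves empty.

shopt -s extglob                          # needed for the @(...) namespace glob below
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

pci_can_use() {                           # stub: usable unless listed in PCI_BLOCKED
	[[ " ${PCI_BLOCKED-} " != *" $1 "* ]]
}

for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}   # assumption: BDF from sysfs
	pci_can_use "$pci" || continue
	ctrl_dev=${ctrl##*/}                                  # e.g. nvme3
	nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
	declare -gA "${ctrl_dev}_ns=()"                       # per-controller namespace map
	# both ngXnY character devices and nvmeXnY block devices, as in functions.sh@54
	for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
		[[ -e $ns ]] || continue
		ns_dev=${ns##*/}
		nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
		eval "${ctrl_dev}_ns[\${ns_dev##*n}]=\$ns_dev"     # index by namespace number
	done
	ctrls[$ctrl_dev]=$ctrl_dev
	nvmes[$ctrl_dev]=${ctrl_dev}_ns                       # name of the per-ctrl ns array
	bdfs[$ctrl_dev]=$pci                                  # e.g. 0000:00:13.0
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done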
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:13.709 09:26:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:13.709 09:26:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:13.709 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 
09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:13.710 09:26:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 
09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.710 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:13.711 
09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.711 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:13.712 09:26:41 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:13.712 09:26:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
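
The scan traced above repeats one mechanism for every identify-controller field: `nvme id-ctrl` output is read line by line with IFS=:, the register name is trimmed, and the pair is eval'd into a per-controller bash associative array (nvme3[oncs]=0x15d and so on). Below is a minimal sketch of that pattern, assuming nvme-cli's human-readable "reg : val" id-ctrl output; the names are illustrative, and it uses a nameref where functions.sh reaches the dynamically named array via eval:

    #!/usr/bin/env bash
    # Sketch: parse "reg : val" lines from nvme-cli into an associative array,
    # mirroring the IFS=: / read / assign loop visible in the trace above.
    declare -A nvme0                      # one array per controller
    declare -n _ctrl=nvme0                # simplification; the real script evals
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # trim whitespace from the register name
        val=${val# }                      # drop the leading space, keep inner ones
        [[ -n $reg && -n $val ]] && _ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    printf 'oncs=%s\n' "${_ctrl[oncs]}"   # e.g. 0x15d, as in the dump above

Because read assigns the remainder of the line to its last variable, values that themselves contain colons (the ps0 power-state entry above) land in val intact.
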
00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:13.712 09:26:41 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:13.712 09:26:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:13.712 09:26:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:13.712 09:26:41 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:13.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.544 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.544 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.544 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.544 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.804 09:26:42 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:14.804 09:26:42 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:14.804 09:26:42 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.804 09:26:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:14.804 ************************************ 00:09:14.804 START TEST nvme_simple_copy 00:09:14.804 ************************************ 00:09:14.804 09:26:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:15.066 Initializing NVMe Controllers 00:09:15.066 Attaching to 0000:00:10.0 00:09:15.066 Controller supports SCC. Attached to 0000:00:10.0 00:09:15.066 Namespace ID: 1 size: 6GB 00:09:15.066 Initialization complete. 
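
Controller selection for this copy test came from the bitmap parsed just above: get_ctrls_with_feature probes each controller's ONCS value, and bit 8 of ONCS advertises the NVMe Copy command (SCC). With oncs=0x15d every emulated controller qualifies, and the first match, nvme1 at 0000:00:10.0, is used. A minimal sketch of that selection, assuming the per-controller arrays built by the scan; get_reg and pick_scc_ctrl are illustrative names, not the functions.sh source:

    # Sketch: return the first controller whose ONCS advertises Copy (bit 8).
    get_reg() {                           # look up one parsed field by name
        local -n _c=$1
        echo "${_c[$2]:-0}"
    }
    pick_scc_ctrl() {
        local ctrl oncs
        for ctrl in "${!ctrls[@]}"; do    # hash order, not numeric order
            oncs=$(get_reg "$ctrl" oncs)  # e.g. 0x15d from the dump above
            if (( oncs & (1 << 8) )); then  # 0x15d & 0x100 != 0 -> SCC supported
                echo "$ctrl"
                return 0
            fi
        done
        return 1
    }

Hash-order iteration is why the trace visited nvme1, nvme0, nvme3, nvme2 rather than numeric order, and why nvme1 wins even though all four controllers match.
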
00:09:15.066 00:09:15.066 Controller QEMU NVMe Ctrl (12340 ) 00:09:15.066 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:15.066 Namespace Block Size:4096 00:09:15.066 Writing LBAs 0 to 63 with Random Data 00:09:15.066 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:15.066 LBAs matching Written Data: 64 00:09:15.066 00:09:15.066 real 0m0.263s 00:09:15.066 user 0m0.097s 00:09:15.066 sys 0m0.064s 00:09:15.066 09:26:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.066 09:26:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:15.066 ************************************ 00:09:15.066 END TEST nvme_simple_copy 00:09:15.066 ************************************ 00:09:15.066 00:09:15.066 real 0m7.693s 00:09:15.066 user 0m1.106s 00:09:15.066 sys 0m1.262s 00:09:15.066 09:26:42 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.066 09:26:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:15.066 ************************************ 00:09:15.066 END TEST nvme_scc 00:09:15.066 ************************************ 00:09:15.066 09:26:42 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:15.066 09:26:42 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:15.066 09:26:42 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:15.066 09:26:42 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:15.066 09:26:42 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:15.066 09:26:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.066 09:26:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.066 09:26:42 -- common/autotest_common.sh@10 -- # set +x 00:09:15.066 ************************************ 00:09:15.066 START TEST nvme_fdp 00:09:15.066 ************************************ 00:09:15.066 09:26:42 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:15.066 * Looking for test storage... 00:09:15.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:15.066 09:26:42 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.066 09:26:42 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.066 09:26:42 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.328 09:26:42 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:15.328 09:26:42 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.328 09:26:42 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.328 --rc genhtml_branch_coverage=1 00:09:15.328 --rc genhtml_function_coverage=1 00:09:15.328 --rc genhtml_legend=1 00:09:15.328 --rc geninfo_all_blocks=1 00:09:15.328 --rc geninfo_unexecuted_blocks=1 00:09:15.328 00:09:15.328 ' 00:09:15.328 09:26:42 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.328 --rc genhtml_branch_coverage=1 00:09:15.328 --rc genhtml_function_coverage=1 00:09:15.328 --rc genhtml_legend=1 00:09:15.328 --rc geninfo_all_blocks=1 00:09:15.328 --rc geninfo_unexecuted_blocks=1 00:09:15.328 00:09:15.328 ' 00:09:15.328 09:26:42 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.328 --rc genhtml_branch_coverage=1 00:09:15.328 --rc genhtml_function_coverage=1 00:09:15.328 --rc genhtml_legend=1 00:09:15.328 --rc geninfo_all_blocks=1 00:09:15.328 --rc geninfo_unexecuted_blocks=1 00:09:15.328 00:09:15.328 ' 00:09:15.328 09:26:42 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.328 --rc genhtml_branch_coverage=1 00:09:15.328 --rc genhtml_function_coverage=1 00:09:15.328 --rc genhtml_legend=1 00:09:15.328 --rc geninfo_all_blocks=1 00:09:15.328 --rc geninfo_unexecuted_blocks=1 00:09:15.328 00:09:15.328 ' 00:09:15.328 09:26:42 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:15.328 09:26:42 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:15.328 09:26:42 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:15.328 09:26:42 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:15.328 09:26:42 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.328 09:26:42 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.328 09:26:42 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.328 09:26:42 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.328 09:26:42 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.328 09:26:42 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:15.328 09:26:42 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:15.329 09:26:42 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:15.329 09:26:42 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.329 09:26:42 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:15.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.590 Waiting for block devices as requested 00:09:15.591 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.852 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.852 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.852 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.156 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:21.156 09:26:48 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:21.156 09:26:48 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:21.156 09:26:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:21.157 09:26:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:21.157 09:26:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:21.157 09:26:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.157 09:26:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:21.157 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.157 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.158 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.158 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.159 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:21.160 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 
09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:21.160 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.160 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:21.161 09:26:48 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:21.161 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.161 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:21.162 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:21.163 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
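The run of records above is nvme/functions.sh's nvme_get helper filling the ng0n1 associative array from `/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1`: with IFS=: it reads each output line into a reg/val pair (functions.sh@21), skips lines with an empty value (@22), and evals the assignment (@23) — eval is needed there because the target array name arrives as the $ref argument. Below is a minimal self-contained sketch of that pattern, reconstructed from this trace; it assumes nvme-cli's human-readable "field : value" output and uses a fixed array name in place of the helper's dynamic $ref, so treat it as an approximation of functions.sh, not the script itself:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing loop traced above (functions.sh@16-23).
    # Assumes an `nvme` binary on PATH and a /dev/ng0n1 device, as in this run.
    declare -A ng0n1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # mirrors the [[ -n ... ]] guard at @22
        reg=${reg//[[:space:]]/}       # "lbaf  4" -> "lbaf4", matching the keys above
        val=${val# }                   # drop the single space after the colon
        ng0n1[$reg]=$val               # real helper: eval "${ref}[${reg}]=\"${val}\""
    done < <(nvme id-ns /dev/ng0n1)
    printf 'nsze=%s flbas=%s\n' "${ng0n1[nsze]}" "${ng0n1[flbas]}"

Note that the last read variable keeps the rest of the line intact, which is why multi-colon values such as "ms:0 lbads:12 rp:0 (in use)" survive the IFS=: split; immediately after this parse, the @58 record below registers the filled array in _ctrl_ns, keyed by the namespace index.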
00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:21.163 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:21.164 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:21.164 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.165 09:26:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:21.165 09:26:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:21.166 09:26:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:21.166 09:26:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:21.166 09:26:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.166 09:26:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:21.166 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.166 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:21.167 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.168 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.169 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:21.170 09:26:48
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:21.170 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:21.171 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.171 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.172 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.172 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.172 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:21.173 09:26:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:21.173 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:21.174 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
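The trace lines above and below are all the same short helper in nvme/functions.sh: it loops over the "reg : val" lines that nvme-cli prints and stores each pair in a bash associative array, which is why every field shows up as an IFS=: / read / [[ -n ... ]] / eval quartet. A minimal sketch of that loop, reconstructed only from the xtrace (the whitespace trimming and the skip on empty values are assumptions, not the verbatim function):

  # Sketch reconstructed from the xtrace above; not the verbatim helper.
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                # e.g. local -gA 'nvme1n1=()'
      while IFS=: read -r reg val; do    # split each "reg : val" line at the first colon
          [[ -n $val ]] || continue      # header lines carry no value, as in [[ -n '' ]] above
          eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }
  # invoked as seen in this log, e.g.: nvme_get nvme1n1 id-ns /dev/nvme1n1

Each [[ -n ... ]] / eval pair in the trace is one iteration of this loop, so keys such as nsze, flbas and lbaf0..lbaf7 land in nvme1n1 exactly as echoed.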
00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:21.174 09:26:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:21.174 09:26:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:21.174 09:26:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.174 09:26:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.174 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
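[editor's note] The trace above is nvme/functions.sh's nvme_get helper loading the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 into the global associative array nvme2, one "field : value" line at a time. A minimal sketch of that pattern follows — not the verbatim SPDK helper; the body is inferred from the eval'd assignments and the IFS=:/read pairs visible in the trace:

#!/usr/bin/env bash
# Sketch (assumption, modeled on the trace): run an nvme-cli query and load
# each "field : value" output line into a global associative array.
nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"               # e.g. declare -gA nvme2=(), as at @20
  while IFS=: read -r reg val; do
    # Banner/blank lines have nothing after the first colon; skip them,
    # mirroring the [[ -n ... ]] guards at @22 in the trace.
    [[ -n $val ]] || continue
    reg=${reg//[[:space:]]/}        # "ps    0" becomes key ps0, as in the trace
    val=${val# }                    # drop one pad space; trailing spaces survive
    # read assigns the remainder (internal colons included) to val, which is
    # why nvme2[rwt] ends up as '0 rwl:0 idle_power:- active_power:-'.
    eval "${ref}[$reg]=\$val"
  done < <("$@")
}

nvme_get nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
echo "vid=${nvme2[vid]} sn=${nvme2[sn]} subnqn=${nvme2[subnqn]}"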
00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.175 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:21.176 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
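[editor's note] Once populated, the fields are plain shell lookups. One worked example of why mdts=7 matters, reusing the nvme2 array from the sketch above: per the NVMe base spec, MDTS is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN). The 4096-byte page below is an assumption for this QEMU controller, not a value read from the trace.

# Hedged sketch: derive the max transfer size from the mdts field.
mps_min=4096                                   # assumed CAP.MPSMIN page size
if ((nvme2[mdts] > 0)); then
  max_xfer=$((mps_min * (1 << nvme2[mdts])))   # mdts=7 -> 524288 bytes (512 KiB)
else
  max_xfer=0                                   # mdts=0 means "no reported limit"
fi
printf 'nvme2 max transfer: %d bytes\n' "$max_xfer"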
00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:21.176 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:21.177 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:21.177 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.178 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:21.179 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
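[editor's note] After the last controller field is stored, the trace moves on to the @54 loop that enumerates this controller's namespace nodes (ng2n1, ng2n2, ...). That loop relies on an extended-glob alternation matching both the generic character-device names (ng2n1) and the block-device names (nvme2n1) under the controller's sysfs directory. A self-contained sketch of that glob; the shopt line and nullglob are additions for standalone use, not taken from the trace:

#!/usr/bin/env bash
shopt -s extglob nullglob        # extglob enables @(...|...); nullglob skips empty matches
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2", so this expands to
# /sys/class/nvme/nvme2/@(ng2|nvme2n)* — the same pattern shown at @54.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  ns_dev=${ns##*/}               # e.g. ng2n1
  [[ -e $ctrl/$ns_dev ]] || continue   # mirrors the existence check at @55
  echo "found namespace node: $ns_dev"
done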
00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:21.450 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 
09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.451 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.452 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.453 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:21.454 09:26:48 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 
09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.454 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.455 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:21.456 
09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
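What these xtrace fragments record is nvme/functions.sh invoking nvme-cli's id-ns against each namespace node and folding every "field : value" line of the output into a global associative array named after the device (ng2n3 here). A minimal sketch of that read/eval loop, reconstructed from the functions.sh@16-23 steps visible in the trace; the nvme binary path and array layout are as shown above, but the trimming details are an assumption and the real helper may differ:

  # Sketch of the nvme_get helper as the trace shows it behaving (not the verbatim script).
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                        # e.g. declare -gA ng2n3=(), as in functions.sh@20
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}               # "nsze    " -> "nsze", "lbaf  4" -> "lbaf4"
          val=${val# }                           # drop the space after the colon
          [[ -n $val ]] || continue              # header lines carry no value and are skipped (functions.sh@22)
          eval "${ref}[\$reg]=\$val"             # ng2n3[nsze]=0x100000, ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }
  # usage matching the trace: nvme_get ng2n3 id-ns /dev/ng2n3

The eval indirection is what lets one helper populate differently named arrays (ng2n2, ng2n3, nvme2n1, ...) for every device it is pointed at.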
00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:21.456 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:21.456 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.457 09:26:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:21.457 09:26:48 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:21.457 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.458 
09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:21.458 09:26:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:21.458 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.459 
09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.459 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
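Every namespace in this run reports the same geometry, so the fields above decode identically for each one: the low nibble of flbas 0x4 selects LBA format 4, lbaf4 reads "ms:0 lbads:12 rp:0 (in use)", lbads 12 means 2^12 = 4096-byte blocks, and nsze 0x100000 blocks therefore works out to 4 GiB with no per-block metadata. A short follow-on calculation over an array shaped like the trace output; the array here is seeded by hand purely for illustration:

  # Derive block size and capacity from the fields the trace just parsed.
  declare -A nvme2n1=([nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
  fmt=$(( ${nvme2n1[flbas]} & 0xf ))          # low nibble of FLBAS = format index in use -> 4
  lbads=${nvme2n1[lbaf$fmt]##*lbads:}         # "12 rp:0 (in use)"
  lbads=${lbads%% *}                          # "12"
  bs=$(( 1 << lbads ))                        # 2^12 = 4096-byte logical blocks
  total=$(( ${nvme2n1[nsze]} * bs ))          # 0x100000 * 4096 = 4294967296
  echo "block=${bs}B capacity=$(( total >> 30 ))GiB"   # prints: block=4096B capacity=4GiB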
00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:21.460 09:26:48 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.460 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:21.461 09:26:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.461 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:21.462 09:26:49 
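The trace above is the nvme_get helper walking `nvme id-ns` output one `field : value` line at a time: it splits on the first colon, skips header lines with an empty value, and evals each pair into an associative array named after the namespace (nvme2n2, nvme2n3, ...). A minimal standalone sketch of the same pattern, assuming nvme-cli is installed; the array name and device path are illustrative, not the functions.sh source:

    # Parse "field : value" lines from nvme-cli into a bash associative array.
    declare -A ns_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # "lbaf  0 " collapses to "lbaf0", as in the trace
        [[ -n $reg && -n $val ]] || continue   # drop the header line ("... Namespace 1:")
        ns_info[$reg]=${val# }         # keep the raw value, minus one leading space
    done < <(nvme id-ns /dev/nvme0n1)
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"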
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.462 09:26:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.462 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:21.463 09:26:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.463 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.464 09:26:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:21.464 09:26:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:21.464 09:26:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:21.464 09:26:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.464 09:26:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- 
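Each namespace parse ends with eight lbafN entries (metadata size, LBA data size, relative performance), and flbas=0x4 marks lbaf4 as the format in use: ms:0, lbads:12, i.e. 4096-byte data blocks with no metadata, since lbads is the base-2 log of the block size. A small sketch of that decode, using the values from the trace (variable names are illustrative):

    # Derive the in-use block size from flbas + the matching lbaf entry.
    flbas=0x4                                  # low nibble indexes the LBA format list
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    idx=$(( flbas & 0xf ))
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf4")
    echo "lbaf$idx: data block size $(( 1 << lbads )) bytes"   # 2^12 = 4096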
nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.464 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- 
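Two id-ctrl values here are packed fields. ver=0x10400 follows the NVMe version register layout (major in bits 31:16, minor in 15:8, tertiary in 7:0), so this QEMU controller reports NVMe 1.4.0; ctratt=0x88010 is the controller-attributes bitmask the FDP check at the end of this enumeration will test. A one-liner sketch of the version decode:

    # Decode the NVMe version register (MJR:16 | MNR:8 | TER:8).
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0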
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:21.465 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 
09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # 
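wctemp=343 and cctemp=373 look odd until you recall that id-ctrl reports temperature thresholds in Kelvin: roughly 70 C for the warning composite temperature and 100 C for the critical one. A trivial sketch of the conversion (integer approximation, ignoring the 0.15 K):

    # id-ctrl temperature thresholds are in Kelvin.
    wctemp=343; cctemp=373
    echo "warning: $(( wctemp - 273 ))C, critical: $(( cctemp - 273 ))C"   # 70C / 100C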
eval 'nvme3[hmmin]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.466 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
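sqes=0x66 and cqes=0x44 are also packed: in each byte the low nibble is the required entry size and the high nibble the maximum, both as the base-2 log of the size in bytes, giving the standard 64-byte submission and 16-byte completion queue entries. A sketch of the decode, with the trace's values:

    # Decode SQES/CQES nibbles (log2 of queue entry size in bytes).
    sqes=0x66; cqes=0x44
    echo "SQ entry: $(( 1 << (sqes & 0xf) )) B (max $(( 1 << (sqes >> 4) )) B)"   # 64 B
    echo "CQ entry: $(( 1 << (cqes & 0xf) )) B (max $(( 1 << (cqes >> 4) )) B)"   # 16 B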
00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.467 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:21.468 09:26:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:21.468 09:26:49 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@75 
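Note the `local -n` lines in the trace: the helpers address a controller's array through a bash nameref, so a function handed the string "nvme3" can read the array declared under that name. A minimal sketch of the trick, assuming bash 4.3+; get_reg is a hypothetical stand-in for the lookup the helpers perform, not the functions.sh code itself:

    # A nameref lets a function treat a variable *name* as a handle to the array.
    declare -A nvme3=( [ctratt]=0x88010 )
    get_reg() { local -n _ctrl=$1; echo "${_ctrl[$2]}"; }
    get_reg nvme3 ctratt    # -> 0x88010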
-- # [[ -n 0x8000 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:21.469 09:26:49 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:21.469 09:26:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:21.469 09:26:49 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:21.469 09:26:49 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:22.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.614 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.614 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.614 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.614 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.614 09:26:50 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:22.614 09:26:50 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:22.614 09:26:50 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.614 09:26:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:22.614 ************************************ 00:09:22.614 START TEST nvme_flexible_data_placement 00:09:22.614 ************************************ 00:09:22.614 09:26:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:22.877 Initializing NVMe Controllers 00:09:22.877 Attaching to 0000:00:13.0 00:09:22.877 Controller supports FDP Attached to 0000:00:13.0 00:09:22.877 Namespace ID: 1 Endurance Group ID: 1 00:09:22.877 Initialization complete. 
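Note on the controller selection traced above: it reduces to a single test, CTRATT bit 19 (FDP support) must be set, which only nvme3 (ctratt=0x88010) satisfies while the others report 0x8000. A standalone sketch of the same probe, assuming nvme-cli is installed and that its id-ctrl output carries a "ctratt : 0x..." field (the awk field layout is an assumption, not harness code):

    # Hedged sketch, not part of the harness: probe FDP support via CTRATT bit 19.
    ctratt=$(nvme id-ctrl /dev/nvme3 | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( ctratt & 1 << 19 )); then
        echo "FDP supported"   # matches the harness settling on nvme3
    fi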
00:09:22.877 00:09:22.877 ================================== 00:09:22.877 == FDP tests for Namespace: #01 == 00:09:22.877 ================================== 00:09:22.877 00:09:22.877 Get Feature: FDP: 00:09:22.877 ================= 00:09:22.877 Enabled: Yes 00:09:22.877 FDP configuration Index: 0 00:09:22.877 00:09:22.877 FDP configurations log page 00:09:22.877 =========================== 00:09:22.877 Number of FDP configurations: 1 00:09:22.877 Version: 0 00:09:22.877 Size: 112 00:09:22.877 FDP Configuration Descriptor: 0 00:09:22.877 Descriptor Size: 96 00:09:22.877 Reclaim Group Identifier format: 2 00:09:22.877 FDP Volatile Write Cache: Not Present 00:09:22.877 FDP Configuration: Valid 00:09:22.877 Vendor Specific Size: 0 00:09:22.877 Number of Reclaim Groups: 2 00:09:22.877 Number of Reclaim Unit Handles: 8 00:09:22.877 Max Placement Identifiers: 128 00:09:22.877 Number of Namespaces Supported: 256 00:09:22.877 Reclaim Unit Nominal Size: 6000000 bytes 00:09:22.877 Estimated Reclaim Unit Time Limit: Not Reported 00:09:22.877 RUH Desc #000: RUH Type: Initially Isolated 00:09:22.877 RUH Desc #001: RUH Type: Initially Isolated 00:09:22.877 RUH Desc #002: RUH Type: Initially Isolated 00:09:22.877 RUH Desc #003: RUH Type: Initially Isolated 00:09:22.877 RUH Desc #004: RUH Type: Initially Isolated 00:09:22.877 RUH Desc #005: RUH Type: Initially Isolated 00:09:22.877 RUH Desc #006: RUH Type: Initially Isolated 00:09:22.877 RUH Desc #007: RUH Type: Initially Isolated 00:09:22.877 00:09:22.877 FDP reclaim unit handle usage log page 00:09:22.877 ====================================== 00:09:22.877 Number of Reclaim Unit Handles: 8 00:09:22.877 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:22.877 RUH Usage Desc #001: RUH Attributes: Unused 00:09:22.877 RUH Usage Desc #002: RUH Attributes: Unused 00:09:22.877 RUH Usage Desc #003: RUH Attributes: Unused 00:09:22.877 RUH Usage Desc #004: RUH Attributes: Unused 00:09:22.877 RUH Usage Desc #005: RUH Attributes: Unused 00:09:22.877 RUH Usage Desc #006: RUH Attributes: Unused 00:09:22.877 RUH Usage Desc #007: RUH Attributes: Unused 00:09:22.877 00:09:22.877 FDP statistics log page 00:09:22.877 ======================= 00:09:22.877 Host bytes with metadata written: 2108530688 00:09:22.877 Media bytes with metadata written: 2109652992 00:09:22.877 Media bytes erased: 0 00:09:22.877 00:09:22.877 FDP Reclaim unit handle status 00:09:22.877 ============================== 00:09:22.877 Number of RUHS descriptors: 2 00:09:22.877 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000526 00:09:22.877 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:22.877 00:09:22.877 FDP write on placement id: 0 success 00:09:22.877 00:09:22.877 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:22.877 00:09:22.877 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:22.877 00:09:22.877 Get Feature: FDP Events for Placement handle: #0 00:09:22.877 ======================== 00:09:22.877 Number of FDP Events: 6 00:09:22.877 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:22.877 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:22.877 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:22.877 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:22.877 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:22.877 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:22.877 00:09:22.877 FDP events log
page 00:09:22.877 =================== 00:09:22.877 Number of FDP events: 1 00:09:22.877 FDP Event #0: 00:09:22.877 Event Type: RU Not Written to Capacity 00:09:22.877 Placement Identifier: Valid 00:09:22.877 NSID: Valid 00:09:22.877 Location: Valid 00:09:22.877 Placement Identifier: 0 00:09:22.877 Event Timestamp: 4 00:09:22.877 Namespace Identifier: 1 00:09:22.877 Reclaim Group Identifier: 0 00:09:22.877 Reclaim Unit Handle Identifier: 0 00:09:22.877 00:09:22.877 FDP test passed 00:09:22.877 ************************************ 00:09:22.877 END TEST nvme_flexible_data_placement 00:09:22.877 ************************************ 00:09:22.877 00:09:22.877 real 0m0.233s 00:09:22.877 user 0m0.066s 00:09:22.877 sys 0m0.065s 00:09:22.877 09:26:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.877 09:26:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:22.877 ************************************ 00:09:22.877 END TEST nvme_fdp 00:09:22.877 ************************************ 00:09:22.877 00:09:22.877 real 0m7.803s 00:09:22.877 user 0m1.102s 00:09:22.877 sys 0m1.381s 00:09:22.877 09:26:50 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.877 09:26:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:22.877 09:26:50 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:22.877 09:26:50 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:22.877 09:26:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.877 09:26:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.877 09:26:50 -- common/autotest_common.sh@10 -- # set +x 00:09:22.877 ************************************ 00:09:22.877 START TEST nvme_rpc 00:09:22.877 ************************************ 00:09:22.877 09:26:50 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:23.139 * Looking for test storage... 
00:09:23.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:23.139 09:26:50 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:23.139 09:26:50 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:23.139 09:26:50 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:23.139 09:26:50 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.140 09:26:50 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.140 --rc genhtml_branch_coverage=1 00:09:23.140 --rc genhtml_function_coverage=1 00:09:23.140 --rc genhtml_legend=1 00:09:23.140 --rc geninfo_all_blocks=1 00:09:23.140 --rc geninfo_unexecuted_blocks=1 00:09:23.140 00:09:23.140 ' 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.140 --rc genhtml_branch_coverage=1 00:09:23.140 --rc genhtml_function_coverage=1 00:09:23.140 --rc genhtml_legend=1 00:09:23.140 --rc geninfo_all_blocks=1 00:09:23.140 --rc geninfo_unexecuted_blocks=1 00:09:23.140 00:09:23.140 ' 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.140 --rc genhtml_branch_coverage=1 00:09:23.140 --rc genhtml_function_coverage=1 00:09:23.140 --rc genhtml_legend=1 00:09:23.140 --rc geninfo_all_blocks=1 00:09:23.140 --rc geninfo_unexecuted_blocks=1 00:09:23.140 00:09:23.140 ' 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.140 --rc genhtml_branch_coverage=1 00:09:23.140 --rc genhtml_function_coverage=1 00:09:23.140 --rc genhtml_legend=1 00:09:23.140 --rc geninfo_all_blocks=1 00:09:23.140 --rc geninfo_unexecuted_blocks=1 00:09:23.140 00:09:23.140 ' 00:09:23.140 09:26:50 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.140 09:26:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:23.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.140 09:26:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:23.140 09:26:50 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=79121 00:09:23.140 09:26:50 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:23.140 09:26:50 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:23.140 09:26:50 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 79121 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 79121 ']' 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.140 09:26:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.140 [2024-11-29 09:26:50.838280] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
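Note on the setup traced above: get_first_nvme_bdf asks gen_nvme.sh for the configured traddrs and keeps the first; waitforlisten then blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A condensed sketch of that pattern (the polling loop is a simplification of the real waitforlisten helper in autotest_common.sh; rpc_get_methods is a standard SPDK RPC):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[0].params.traddr')   # first NVMe BDF
    build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    # Simplified stand-in for waitforlisten: poll until the RPC server responds.
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"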
00:09:23.140 [2024-11-29 09:26:50.838552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79121 ] 00:09:23.401 [2024-11-29 09:26:50.971577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:23.401 [2024-11-29 09:26:51.000647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:23.401 [2024-11-29 09:26:51.021049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.401 [2024-11-29 09:26:51.021080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.976 09:26:51 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.976 09:26:51 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:23.976 09:26:51 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:24.238 Nvme0n1 00:09:24.238 09:26:51 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:24.238 09:26:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:24.500 request: 00:09:24.500 { 00:09:24.500 "bdev_name": "Nvme0n1", 00:09:24.500 "filename": "non_existing_file", 00:09:24.500 "method": "bdev_nvme_apply_firmware", 00:09:24.500 "req_id": 1 00:09:24.500 } 00:09:24.500 Got JSON-RPC error response 00:09:24.500 response: 00:09:24.500 { 00:09:24.500 "code": -32603, 00:09:24.500 "message": "open file failed." 00:09:24.500 } 00:09:24.500 09:26:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:24.500 09:26:52 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:24.500 09:26:52 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:24.761 09:26:52 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:24.761 09:26:52 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 79121 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 79121 ']' 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 79121 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79121 00:09:24.761 killing process with pid 79121 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79121' 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@973 -- # kill 79121 00:09:24.761 09:26:52 nvme_rpc -- common/autotest_common.sh@978 -- # wait 79121 00:09:25.024 ************************************ 00:09:25.024 END TEST nvme_rpc 00:09:25.024 ************************************ 00:09:25.024 00:09:25.024 real 0m2.050s 00:09:25.024 user 0m3.997s 00:09:25.024 sys 0m0.444s 00:09:25.024 09:26:52 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.024 09:26:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.024 09:26:52 
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:25.024 09:26:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.024 09:26:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.024 09:26:52 -- common/autotest_common.sh@10 -- # set +x 00:09:25.024 ************************************ 00:09:25.024 START TEST nvme_rpc_timeouts 00:09:25.024 ************************************ 00:09:25.025 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:25.025 * Looking for test storage... 00:09:25.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:25.025 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.025 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.025 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.287 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.287 09:26:52 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:25.287 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.287 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.287 --rc genhtml_branch_coverage=1 00:09:25.287 --rc genhtml_function_coverage=1 00:09:25.287 --rc genhtml_legend=1 00:09:25.287 --rc geninfo_all_blocks=1 00:09:25.287 --rc geninfo_unexecuted_blocks=1 00:09:25.287 00:09:25.287 ' 00:09:25.287 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.287 --rc genhtml_branch_coverage=1 00:09:25.287 --rc genhtml_function_coverage=1 00:09:25.287 --rc genhtml_legend=1 00:09:25.287 --rc geninfo_all_blocks=1 00:09:25.287 --rc geninfo_unexecuted_blocks=1 00:09:25.287 00:09:25.287 ' 00:09:25.287 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.287 --rc genhtml_branch_coverage=1 00:09:25.287 --rc genhtml_function_coverage=1 00:09:25.287 --rc genhtml_legend=1 00:09:25.287 --rc geninfo_all_blocks=1 00:09:25.287 --rc geninfo_unexecuted_blocks=1 00:09:25.287 00:09:25.287 ' 00:09:25.287 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.287 --rc genhtml_branch_coverage=1 00:09:25.287 --rc genhtml_function_coverage=1 00:09:25.287 --rc genhtml_legend=1 00:09:25.287 --rc geninfo_all_blocks=1 00:09:25.287 --rc geninfo_unexecuted_blocks=1 00:09:25.287 00:09:25.287 ' 00:09:25.287 09:26:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.287 09:26:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79175 00:09:25.287 09:26:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79175 00:09:25.288 09:26:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79207 00:09:25.288 09:26:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
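Note on the lcov version gate that reappears before each sub-test (lt 1.15 2 via cmp_versions): it is an ordinary field-by-field dotted-version comparison. A simplified equivalent, not the exact scripts/common.sh implementation, which handles more separators:

    lt() {  # return 0 (true) when dotted version $1 < $2
        local -a v1 v2
        local i
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc option spelling"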
00:09:25.288 09:26:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79207 00:09:25.288 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 79207 ']' 00:09:25.288 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.288 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.288 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.288 09:26:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:25.288 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.288 09:26:52 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:25.288 [2024-11-29 09:26:52.889268] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:25.288 [2024-11-29 09:26:52.889609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79207 ] 00:09:25.550 [2024-11-29 09:26:53.025061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:25.550 [2024-11-29 09:26:53.053804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.550 [2024-11-29 09:26:53.080733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.550 [2024-11-29 09:26:53.080857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.125 09:26:53 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.125 09:26:53 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:26.125 09:26:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:26.125 Checking default timeout settings: 00:09:26.125 09:26:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:26.386 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:26.386 Making settings changes with rpc: 00:09:26.386 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:26.648 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:26.648 Check default vs. 
modified settings: 00:09:26.648 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79175 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79175 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:27.220 Setting action_on_timeout is changed as expected. 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79175 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79175 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:27.220 Setting timeout_us is changed as expected. 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
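Each "is changed as expected" line above comes from the same three-stage pipeline: grep the option out of a saved config, take the value column with awk, strip punctuation with sed, then compare the default snapshot against the modified one. Condensed from the trace (file names taken from this run):

    get_setting() {  # $1 = option name, $2 = saved-config file
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting timeout_us /tmp/settings_default_79175)
    after=$(get_setting timeout_us /tmp/settings_modified_79175)
    if [[ "$before" != "$after" ]]; then
        echo "Setting timeout_us is changed as expected."
    fi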
00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79175 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79175 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:27.220 Setting timeout_admin_us is changed as expected. 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79175 /tmp/settings_modified_79175 00:09:27.220 09:26:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79207 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79207 ']' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79207 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79207 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.220 killing process with pid 79207 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79207' 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79207 00:09:27.220 09:26:54 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79207 00:09:27.492 RPC TIMEOUT SETTING TEST PASSED. 00:09:27.492 09:26:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:09:27.492 00:09:27.492 real 0m2.393s 00:09:27.492 user 0m4.792s 00:09:27.492 sys 0m0.524s 00:09:27.492 09:26:55 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.492 ************************************ 00:09:27.492 END TEST nvme_rpc_timeouts 00:09:27.492 ************************************ 00:09:27.492 09:26:55 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:27.492 09:26:55 -- spdk/autotest.sh@239 -- # uname -s 00:09:27.492 09:26:55 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:27.492 09:26:55 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:27.492 09:26:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.492 09:26:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.492 09:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:27.492 ************************************ 00:09:27.492 START TEST sw_hotplug 00:09:27.492 ************************************ 00:09:27.492 09:26:55 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:27.492 * Looking for test storage... 00:09:27.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:27.492 09:26:55 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.492 09:26:55 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.492 09:26:55 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.753 09:26:55 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.753 09:26:55 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.754 09:26:55 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:27.754 09:26:55 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.754 09:26:55 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.754 --rc genhtml_branch_coverage=1 00:09:27.754 --rc genhtml_function_coverage=1 00:09:27.754 --rc genhtml_legend=1 00:09:27.754 --rc geninfo_all_blocks=1 00:09:27.754 --rc geninfo_unexecuted_blocks=1 00:09:27.754 00:09:27.754 ' 00:09:27.754 09:26:55 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.754 --rc genhtml_branch_coverage=1 00:09:27.754 --rc genhtml_function_coverage=1 00:09:27.754 --rc genhtml_legend=1 00:09:27.754 --rc geninfo_all_blocks=1 00:09:27.754 --rc geninfo_unexecuted_blocks=1 00:09:27.754 00:09:27.754 ' 00:09:27.754 09:26:55 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.754 --rc genhtml_branch_coverage=1 00:09:27.754 --rc genhtml_function_coverage=1 00:09:27.754 --rc genhtml_legend=1 00:09:27.754 --rc geninfo_all_blocks=1 00:09:27.754 --rc geninfo_unexecuted_blocks=1 00:09:27.754 00:09:27.754 ' 00:09:27.754 09:26:55 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.754 --rc genhtml_branch_coverage=1 00:09:27.754 --rc genhtml_function_coverage=1 00:09:27.754 --rc genhtml_legend=1 00:09:27.754 --rc geninfo_all_blocks=1 00:09:27.754 --rc geninfo_unexecuted_blocks=1 00:09:27.754 00:09:27.754 ' 00:09:27.754 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.015 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.015 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.015 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.015 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.277 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:28.277 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:28.277 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
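nvme_in_userspace, whose trace follows, filters lspci for class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), then keeps the functions currently claimed by the kernel nvme driver. A condensed sketch mirroring the grep/awk/tr combination the trace shows (exact pipeline ordering in scripts/common.sh may differ):

    lspci -mm -n -D | grep -i -- -p02 |
        awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"' |
        while read -r bdf; do
            # keep devices currently bound to the kernel nvme driver
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
        done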
00:09:28.277 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.277 09:26:55 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.277 09:26:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:28.278 09:26:55 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.278 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:28.278 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:28.278 09:26:55 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:28.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.801 Waiting for block devices as requested 00:09:28.801 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.801 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.801 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.061 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.414 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:34.414 09:27:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:34.414 09:27:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.414 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:34.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.675 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:34.936 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:35.198 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.198 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:35.198 09:27:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=80056 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:35.198 09:27:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:35.198 09:27:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:35.198 09:27:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:35.198 09:27:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:35.198 09:27:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:35.198 09:27:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:35.460 Initializing NVMe Controllers 00:09:35.460 Attaching to 0000:00:10.0 00:09:35.460 Attaching to 0000:00:11.0 00:09:35.460 Attached to 0000:00:10.0 00:09:35.460 Attached to 0000:00:11.0 00:09:35.460 Initialization complete. Starting I/O... 
00:09:35.460 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:35.460 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:35.460 00:09:36.405 QEMU NVMe Ctrl (12340 ): 2500 I/Os completed (+2500) 00:09:36.406 QEMU NVMe Ctrl (12341 ): 2507 I/Os completed (+2507) 00:09:36.406 00:09:37.781 QEMU NVMe Ctrl (12340 ): 7098 I/Os completed (+4598) 00:09:37.781 QEMU NVMe Ctrl (12341 ): 7107 I/Os completed (+4600) 00:09:37.781 00:09:38.743 QEMU NVMe Ctrl (12340 ): 11393 I/Os completed (+4295) 00:09:38.743 QEMU NVMe Ctrl (12341 ): 11397 I/Os completed (+4290) 00:09:38.743 00:09:39.677 QEMU NVMe Ctrl (12340 ): 15685 I/Os completed (+4292) 00:09:39.677 QEMU NVMe Ctrl (12341 ): 15646 I/Os completed (+4249) 00:09:39.677 00:09:40.610 QEMU NVMe Ctrl (12340 ): 19967 I/Os completed (+4282) 00:09:40.610 QEMU NVMe Ctrl (12341 ): 19898 I/Os completed (+4252) 00:09:40.610 00:09:41.554 09:27:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:41.554 09:27:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.554 09:27:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.554 [2024-11-29 09:27:08.913689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:41.554 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:41.554 [2024-11-29 09:27:08.915015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.915223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.915271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.915338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:41.554 [2024-11-29 09:27:08.916860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.917014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.917055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.917578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 09:27:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.554 09:27:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.554 [2024-11-29 09:27:08.940374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:41.554 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:41.554 [2024-11-29 09:27:08.941676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.941828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.941865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.941924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:41.554 [2024-11-29 09:27:08.943216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.943363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.943430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 [2024-11-29 09:27:08.943449] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.554 09:27:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:41.554 09:27:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:41.554 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:41.554 EAL: Scan for (pci) bus failed. 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:41.554 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:41.554 Attaching to 0000:00:10.0 00:09:41.554 Attached to 0000:00:10.0 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:41.554 09:27:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:41.554 Attaching to 0000:00:11.0 00:09:41.554 Attached to 0000:00:11.0 00:09:42.500 QEMU NVMe Ctrl (12340 ): 3000 I/Os completed (+3000) 00:09:42.500 QEMU NVMe Ctrl (12341 ): 2699 I/Os completed (+2699) 00:09:42.500 00:09:43.444 QEMU NVMe Ctrl (12340 ): 6108 I/Os completed (+3108) 00:09:43.444 QEMU NVMe Ctrl (12341 ): 5807 I/Os completed (+3108) 00:09:43.444 00:09:44.391 QEMU NVMe Ctrl (12340 ): 9260 I/Os completed (+3152) 00:09:44.391 QEMU NVMe Ctrl (12341 ): 8965 I/Os completed (+3158) 00:09:44.391 00:09:45.768 QEMU NVMe Ctrl (12340 ): 13321 I/Os completed (+4061) 00:09:45.768 QEMU NVMe Ctrl (12341 ): 13018 I/Os completed (+4053) 00:09:45.768 00:09:46.700 QEMU NVMe Ctrl (12340 ): 17638 I/Os completed (+4317) 00:09:46.700 QEMU NVMe Ctrl (12341 ): 17313 I/Os completed (+4295) 00:09:46.700 00:09:47.632 QEMU NVMe Ctrl (12340 ): 21922 I/Os completed (+4284) 00:09:47.632 QEMU NVMe Ctrl (12341 ): 21579 I/Os completed (+4266) 00:09:47.632 00:09:48.563 QEMU NVMe Ctrl (12340 ): 26564 I/Os completed (+4642) 00:09:48.563 QEMU NVMe Ctrl (12341 ): 26126 I/Os completed (+4547) 
00:09:48.563 00:09:49.499 QEMU NVMe Ctrl (12340 ): 32093 I/Os completed (+5529) 00:09:49.499 QEMU NVMe Ctrl (12341 ): 31258 I/Os completed (+5132) 00:09:49.499 00:09:50.434 QEMU NVMe Ctrl (12340 ): 37798 I/Os completed (+5705) 00:09:50.434 QEMU NVMe Ctrl (12341 ): 36501 I/Os completed (+5243) 00:09:50.434 00:09:51.815 QEMU NVMe Ctrl (12340 ): 40842 I/Os completed (+3044) 00:09:51.815 QEMU NVMe Ctrl (12341 ): 39546 I/Os completed (+3045) 00:09:51.815 00:09:52.755 QEMU NVMe Ctrl (12340 ): 43492 I/Os completed (+2650) 00:09:52.755 QEMU NVMe Ctrl (12341 ): 42213 I/Os completed (+2667) 00:09:52.755 00:09:53.696 QEMU NVMe Ctrl (12340 ): 46268 I/Os completed (+2776) 00:09:53.696 QEMU NVMe Ctrl (12341 ): 44989 I/Os completed (+2776) 00:09:53.696 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.696 [2024-11-29 09:27:21.229340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:53.696 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:53.696 [2024-11-29 09:27:21.230888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.231068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.231113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.231331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:53.696 [2024-11-29 09:27:21.233898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.233999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.234040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.234076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.696 [2024-11-29 09:27:21.252407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:53.696 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:53.696 [2024-11-29 09:27:21.253896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.254068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.254147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.254188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:53.696 [2024-11-29 09:27:21.255788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.255928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.256000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 [2024-11-29 09:27:21.256040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:53.696 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:53.696 Attaching to 0000:00:10.0 00:09:53.696 Attached to 0000:00:10.0 00:09:53.957 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:53.957 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:53.957 09:27:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:53.957 Attaching to 0000:00:11.0 00:09:53.957 Attached to 0000:00:11.0 00:09:54.554 QEMU NVMe Ctrl (12340 ): 2005 I/Os completed (+2005) 00:09:54.554 QEMU NVMe Ctrl (12341 ): 1795 I/Os completed (+1795) 00:09:54.554 00:09:55.489 QEMU NVMe Ctrl (12340 ): 5957 I/Os completed (+3952) 00:09:55.489 QEMU NVMe Ctrl (12341 ): 5734 I/Os completed (+3939) 00:09:55.489 00:09:56.430 QEMU NVMe Ctrl (12340 ): 8960 I/Os completed (+3003) 00:09:56.430 QEMU NVMe Ctrl (12341 ): 8969 I/Os completed (+3235) 00:09:56.430 00:09:57.812 QEMU NVMe Ctrl (12340 ): 12216 I/Os completed (+3256) 00:09:57.812 QEMU NVMe Ctrl (12341 ): 12029 I/Os completed (+3060) 00:09:57.812 00:09:58.746 QEMU NVMe Ctrl (12340 ): 17175 I/Os completed (+4959) 00:09:58.746 QEMU NVMe Ctrl (12341 ): 17491 I/Os completed (+5462) 00:09:58.746 00:09:59.681 QEMU NVMe Ctrl (12340 ): 23591 I/Os completed (+6416) 00:09:59.681 QEMU NVMe Ctrl (12341 ): 24767 I/Os completed (+7276) 00:09:59.681 00:10:00.614 QEMU NVMe Ctrl (12340 ): 29828 I/Os completed (+6237) 00:10:00.614 QEMU NVMe Ctrl (12341 ): 31006 I/Os completed (+6239) 00:10:00.614 00:10:01.559 QEMU NVMe Ctrl (12340 ): 34131 I/Os completed (+4303) 00:10:01.559 QEMU NVMe Ctrl (12341 ): 35445 I/Os completed (+4439) 00:10:01.559 
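The loop traced above is the heart of the test: each hotplug event surprise-removes both controllers through sysfs (which is what triggers the runs of "aborting outstanding command" errors while queued I/O is failed back), then rescans the PCI bus and reattaches the devices. A minimal sketch of that cycle, assuming the standard Linux sysfs interface; only the rescan target appears verbatim in this log (in the cleanup trap registered further down), and the driver_override rebind paths follow the stock kernel convention rather than lines copied from sw_hotplug.sh:

#!/usr/bin/env bash
# BDFs matching the two controllers in this run.
nvmes=(0000:00:10.0 0000:00:11.0)

# Surprise-remove each device; in-flight I/O gets aborted, producing the
# nvme_pcie_qpair_abort_trackers errors seen in the log.
for bdf in "${nvmes[@]}"; do
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"
done

# Bring the devices back, then pin each one to uio_pci_generic using the
# documented driver_override flow before clearing the override again.
echo 1 > /sys/bus/pci/rescan
for bdf in "${nvmes[@]}"; do
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
done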
00:10:02.504 QEMU NVMe Ctrl (12340 ): 36966 I/Os completed (+2835) 00:10:02.504 QEMU NVMe Ctrl (12341 ): 38231 I/Os completed (+2786) 00:10:02.504 00:10:03.447 QEMU NVMe Ctrl (12340 ): 39957 I/Os completed (+2991) 00:10:03.447 QEMU NVMe Ctrl (12341 ): 41309 I/Os completed (+3078) 00:10:03.447 00:10:04.391 QEMU NVMe Ctrl (12340 ): 42678 I/Os completed (+2721) 00:10:04.391 QEMU NVMe Ctrl (12341 ): 44106 I/Os completed (+2797) 00:10:04.391 00:10:05.780 QEMU NVMe Ctrl (12340 ): 45396 I/Os completed (+2718) 00:10:05.780 QEMU NVMe Ctrl (12341 ): 46820 I/Os completed (+2714) 00:10:05.780 00:10:05.780 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:05.780 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:05.780 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:05.780 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:05.780 [2024-11-29 09:27:33.485520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:05.780 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:05.780 [2024-11-29 09:27:33.486533] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.780 [2024-11-29 09:27:33.486608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.780 [2024-11-29 09:27:33.486644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.780 [2024-11-29 09:27:33.486667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.780 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:05.780 [2024-11-29 09:27:33.487873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.780 [2024-11-29 09:27:33.488012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.780 [2024-11-29 09:27:33.488042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.780 [2024-11-29 09:27:33.488062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.041 [2024-11-29 09:27:33.510060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:06.041 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:06.041 [2024-11-29 09:27:33.510904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 [2024-11-29 09:27:33.510965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 [2024-11-29 09:27:33.510991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 [2024-11-29 09:27:33.511016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:06.041 [2024-11-29 09:27:33.512018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 [2024-11-29 09:27:33.512079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 [2024-11-29 09:27:33.512104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 [2024-11-29 09:27:33.512127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.041 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:06.041 Attaching to 0000:00:10.0 00:10:06.041 Attached to 0000:00:10.0 00:10:06.300 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:06.300 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.300 09:27:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:06.300 Attaching to 0000:00:11.0 00:10:06.300 Attached to 0000:00:11.0 00:10:06.300 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:06.301 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:06.301 [2024-11-29 09:27:33.796581] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:18.591 09:27:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:18.591 09:27:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:18.591 09:27:45 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.89 00:10:18.591 09:27:45 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.89 00:10:18.591 09:27:45 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:18.591 09:27:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.89 00:10:18.591 09:27:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.89 2 00:10:18.591 remove_attach_helper took 42.89s to complete (handling 2 nvme drive(s)) 09:27:45 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 80056 00:10:25.165 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (80056) - No such process 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 80056 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80605 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:25.165 09:27:51 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80605 00:10:25.165 09:27:51 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80605 ']' 00:10:25.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.166 09:27:51 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.166 09:27:51 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.166 09:27:51 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.166 09:27:51 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.166 09:27:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.166 [2024-11-29 09:27:51.882732] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:25.166 [2024-11-29 09:27:51.882853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80605 ] 00:10:25.166 [2024-11-29 09:27:52.016224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:25.166 [2024-11-29 09:27:52.045014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.166 [2024-11-29 09:27:52.074124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:25.166 09:27:52 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:25.166 09:27:52 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:31.811 09:27:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.811 09:27:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.811 09:27:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:31.811 09:27:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:31.811 [2024-11-29 09:27:58.780265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
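The bdfs=($(bdev_bdfs)) / sleep 0.5 traces that follow are a poll loop: list which PCI addresses still back NVMe bdevs in the target and wait for the removed controllers to drop out. A sketch reconstructed from the trace; the jq filter, sort -u, printf format string, and 0.5 s sleep appear verbatim in the surrounding lines, while the rpc.py invocation is an assumption (the log only shows the rpc_cmd wrapper):

#!/usr/bin/env bash
# List the PCI address behind every NVMe-backed bdev the target exposes.
bdev_bdfs() {
  ./scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[].driver_specific.nvme[].pci_address' \
    | sort -u
}

# Poll until the list is empty -- this is the "(( 2 > 0 ))" /
# "Still waiting for %s to be gone" pattern visible in the trace.
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
  printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
  sleep 0.5
  bdfs=($(bdev_bdfs))
done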
00:10:31.811 [2024-11-29 09:27:58.781350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:58.781386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:58.781400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.811 [2024-11-29 09:27:58.781415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:58.781436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:58.781447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.811 [2024-11-29 09:27:58.781453] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:58.781461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:58.781467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.811 [2024-11-29 09:27:58.781475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:58.781481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:58.781488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:31.811 09:27:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.811 09:27:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.811 09:27:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:31.811 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:31.811 [2024-11-29 09:27:59.480263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:31.811 [2024-11-29 09:27:59.481312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:59.481342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:59.481354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.811 [2024-11-29 09:27:59.481364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:59.481373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:59.481380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.811 [2024-11-29 09:27:59.481388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:59.481394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:59.481404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.811 [2024-11-29 09:27:59.481410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.811 [2024-11-29 09:27:59.481426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.811 [2024-11-29 09:27:59.481432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:32.378 09:27:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.378 09:27:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:32.378 09:27:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:32.378 09:27:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:32.378 09:28:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.378 09:28:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.378 09:28:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.378 09:28:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:32.378 09:28:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:32.378 09:28:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.378 09:28:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.571 09:28:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.571 09:28:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.571 09:28:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.571 09:28:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.571 09:28:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.571 [2024-11-29 09:28:12.180455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:44.571 [2024-11-29 09:28:12.181489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.571 [2024-11-29 09:28:12.181519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.571 [2024-11-29 09:28:12.181530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.571 [2024-11-29 09:28:12.181543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.571 [2024-11-29 09:28:12.181550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.571 [2024-11-29 09:28:12.181559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.571 [2024-11-29 09:28:12.181580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.571 [2024-11-29 09:28:12.181598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.571 [2024-11-29 09:28:12.181605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.571 [2024-11-29 09:28:12.181612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.571 [2024-11-29 09:28:12.181619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.571 [2024-11-29 09:28:12.181628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.571 09:28:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:44.571 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:45.138 [2024-11-29 09:28:12.680457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:45.138 [2024-11-29 09:28:12.681470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.138 [2024-11-29 09:28:12.681499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.138 [2024-11-29 09:28:12.681510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.138 [2024-11-29 09:28:12.681520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.138 [2024-11-29 09:28:12.681528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.138 [2024-11-29 09:28:12.681534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.138 [2024-11-29 09:28:12.681542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.138 [2024-11-29 09:28:12.681548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.138 [2024-11-29 09:28:12.681557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.138 [2024-11-29 09:28:12.681564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.138 [2024-11-29 09:28:12.681571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.138 [2024-11-29 09:28:12.681578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.138 09:28:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.138 09:28:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.138 09:28:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:45.138 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:45.397 09:28:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.605 09:28:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.605 09:28:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.605 09:28:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.605 09:28:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.605 09:28:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.605 09:28:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.605 09:28:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:57.605 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.605 [2024-11-29 09:28:25.081290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:57.605 [2024-11-29 09:28:25.082823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.605 [2024-11-29 09:28:25.082871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.605 [2024-11-29 09:28:25.082887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.605 [2024-11-29 09:28:25.082909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.605 [2024-11-29 09:28:25.082918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.605 [2024-11-29 09:28:25.082928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.605 [2024-11-29 09:28:25.082937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.605 [2024-11-29 09:28:25.082949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.605 [2024-11-29 09:28:25.082958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.605 [2024-11-29 09:28:25.082969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.605 [2024-11-29 09:28:25.082977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.605 [2024-11-29 09:28:25.082989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.864 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:57.864 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.864 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.864 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.864 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.864 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.864 09:28:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.864 09:28:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.153 09:28:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.153 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:58.153 09:28:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:58.153 [2024-11-29 09:28:25.781308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:58.153 [2024-11-29 09:28:25.782978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.153 [2024-11-29 09:28:25.783033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.153 [2024-11-29 09:28:25.783052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.153 [2024-11-29 09:28:25.783071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.153 [2024-11-29 09:28:25.783085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.153 [2024-11-29 09:28:25.783095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.153 [2024-11-29 09:28:25.783107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.153 [2024-11-29 09:28:25.783116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.153 [2024-11-29 09:28:25.783130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.153 [2024-11-29 09:28:25.783138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.153 [2024-11-29 09:28:25.783149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.153 [2024-11-29 09:28:25.783158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.413 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:58.413 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.413 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.413 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.413 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.413 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.413 09:28:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.413 09:28:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.413 09:28:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.671 09:28:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.71 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.71 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.71 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.71 2 00:11:10.868 remove_attach_helper took 45.71s to complete (handling 2 nvme drive(s)) 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:10.868 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:10.868 09:28:38 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:10.869 09:28:38 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:10.869 09:28:38 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:10.869 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:10.869 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:10.869 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:10.869 09:28:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:10.869 09:28:38 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.431 09:28:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.431 09:28:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.431 09:28:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:17.431 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:17.431 [2024-11-29 09:28:44.515278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:17.431 [2024-11-29 09:28:44.516514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.431 [2024-11-29 09:28:44.516564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.516579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 [2024-11-29 09:28:44.516612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.432 [2024-11-29 09:28:44.516623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.516634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 [2024-11-29 09:28:44.516642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.432 [2024-11-29 09:28:44.516656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.516665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 [2024-11-29 09:28:44.516674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.432 [2024-11-29 09:28:44.516683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.516694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 [2024-11-29 09:28:44.915282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
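The elapsed-time figures the helper reports (42.89, 45.71) come straight from bash: the timing_cmd trace above sets TIMEFORMAT=%2R, which tells the time keyword to print only the real (wall-clock) seconds with two decimal places. A one-liner to reproduce the mechanism:

# %2R = real time, two decimals; bash prints it on stderr after the command.
TIMEFORMAT=%2R
time sleep 0.3   # prints roughly "0.30"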
00:11:17.432 [2024-11-29 09:28:44.916611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.432 [2024-11-29 09:28:44.916654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.916672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 [2024-11-29 09:28:44.916686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.432 [2024-11-29 09:28:44.916699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.916709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 [2024-11-29 09:28:44.916721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.432 [2024-11-29 09:28:44.916730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.916742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 [2024-11-29 09:28:44.916750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.432 [2024-11-29 09:28:44.916766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.432 [2024-11-29 09:28:44.916775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.432 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:17.432 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.432 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.432 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.432 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.432 09:28:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.432 09:28:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.432 09:28:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.432 09:28:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.432 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:17.432 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.432 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.432 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.432 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.692 09:28:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.923 09:28:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.923 09:28:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.923 09:28:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.923 09:28:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.923 09:28:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.923 09:28:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:29.923 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.923 [2024-11-29 09:28:57.415454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:29.923 [2024-11-29 09:28:57.416290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.923 [2024-11-29 09:28:57.416321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.923 [2024-11-29 09:28:57.416332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.923 [2024-11-29 09:28:57.416349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.923 [2024-11-29 09:28:57.416356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.923 [2024-11-29 09:28:57.416365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.923 [2024-11-29 09:28:57.416372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.923 [2024-11-29 09:28:57.416383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.923 [2024-11-29 09:28:57.416390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.923 [2024-11-29 09:28:57.416398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.923 [2024-11-29 09:28:57.416406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.923 [2024-11-29 09:28:57.416414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.185 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:30.185 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.185 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.447 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.447 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.447 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.447 09:28:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.447 09:28:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.447 [2024-11-29 09:28:57.915432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:30.447 [2024-11-29 09:28:57.916227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.447 [2024-11-29 09:28:57.916255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.447 [2024-11-29 09:28:57.916266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.447 [2024-11-29 09:28:57.916277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.447 [2024-11-29 09:28:57.916287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.447 [2024-11-29 09:28:57.916294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.447 [2024-11-29 09:28:57.916302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.447 [2024-11-29 09:28:57.916308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.447 [2024-11-29 09:28:57.916317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.447 [2024-11-29 09:28:57.916323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.447 [2024-11-29 09:28:57.916332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.447 [2024-11-29 09:28:57.916338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.447 09:28:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.447 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:30.447 09:28:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.021 09:28:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.021 09:28:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.021 09:28:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.021 09:28:58 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.021 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.282 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.282 09:28:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.498 09:29:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.498 09:29:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.498 09:29:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.498 [2024-11-29 09:29:10.815679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
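Two things in the trace above are easy to misread. First, the backslash-laden comparison at sw_hotplug.sh@71 is just xtrace's rendering of a quoted right-hand side inside [[ == ]]; with the escaping undone it is the plain literal check [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]], i.e. both controllers must reappear after the rescan. Second, the bare echo calls at @40, @56 and @59-62 are the two halves of the soft-hotplug cycle; xtrace shows only the values being written, not the redirection targets, so the sysfs paths in the sketch below are our assumption about where those writes land:

# Hot-remove each controller (sw_hotplug.sh@40 shows only 'echo 1').
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"    # assumed target path
done

# ... poll bdev_bdfs until the bdevs are gone (see the sketch above) ...

# Bring the devices back (sw_hotplug.sh@56 shows only 'echo 1').
echo 1 > /sys/bus/pci/rescan                       # assumed target path

# Re-bind each device to the userspace driver (sw_hotplug.sh@59-62).
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # assumed
    echo "$dev" > /sys/bus/pci/drivers_probe       # @60/@61 echo the BDF twice; path assumed
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62 clears the override
done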
00:11:43.498 [2024-11-29 09:29:10.816472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.498 [2024-11-29 09:29:10.816502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.498 [2024-11-29 09:29:10.816512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.498 [2024-11-29 09:29:10.816528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.498 [2024-11-29 09:29:10.816535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.498 [2024-11-29 09:29:10.816543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.498 [2024-11-29 09:29:10.816550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.498 [2024-11-29 09:29:10.816560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.498 [2024-11-29 09:29:10.816566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.498 [2024-11-29 09:29:10.816574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.498 [2024-11-29 09:29:10.816580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.498 [2024-11-29 09:29:10.816598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.498 09:29:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.498 09:29:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.498 09:29:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:43.498 09:29:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:43.757 [2024-11-29 09:29:11.315678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:43.757 [2024-11-29 09:29:11.316391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.757 [2024-11-29 09:29:11.316421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.757 [2024-11-29 09:29:11.316433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.757 [2024-11-29 09:29:11.316443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.757 [2024-11-29 09:29:11.316452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.757 [2024-11-29 09:29:11.316459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.757 [2024-11-29 09:29:11.316469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.757 [2024-11-29 09:29:11.316476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.757 [2024-11-29 09:29:11.316484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.757 [2024-11-29 09:29:11.316490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.757 [2024-11-29 09:29:11.316498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.757 [2024-11-29 09:29:11.316504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.757 09:29:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.757 09:29:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.757 09:29:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.757 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.016 09:29:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.24 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.24 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.24 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.24 2 00:11:56.221 remove_attach_helper took 45.24s to complete (handling 2 nvme drive(s)) 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:56.221 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80605 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80605 ']' 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80605 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80605 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.221 09:29:23 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80605' 00:11:56.222 killing process with pid 80605 00:11:56.222 09:29:23 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80605 00:11:56.222 09:29:23 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80605 00:11:56.222 09:29:23 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:56.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.056 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.056 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.056 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.318 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.318 00:11:57.318 real 2m29.755s 00:11:57.318 user 1m50.962s 00:11:57.318 sys 0m17.253s 00:11:57.318 09:29:24 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.318 09:29:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.318 ************************************ 00:11:57.318 END TEST sw_hotplug 00:11:57.318 ************************************ 00:11:57.318 09:29:24 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:57.318 09:29:24 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:57.318 09:29:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.318 09:29:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.318 09:29:24 -- common/autotest_common.sh@10 -- # set +x 00:11:57.318 ************************************ 00:11:57.318 START TEST nvme_xnvme 00:11:57.318 ************************************ 00:11:57.318 09:29:24 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:57.318 * Looking for test storage... 00:11:57.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.318 09:29:25 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:57.318 09:29:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:57.318 09:29:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.584 09:29:25 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.584 --rc genhtml_branch_coverage=1 00:11:57.584 --rc genhtml_function_coverage=1 00:11:57.584 --rc genhtml_legend=1 00:11:57.584 --rc geninfo_all_blocks=1 00:11:57.584 --rc geninfo_unexecuted_blocks=1 00:11:57.584 00:11:57.584 ' 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.584 --rc genhtml_branch_coverage=1 00:11:57.584 --rc genhtml_function_coverage=1 00:11:57.584 --rc genhtml_legend=1 00:11:57.584 --rc geninfo_all_blocks=1 00:11:57.584 --rc geninfo_unexecuted_blocks=1 00:11:57.584 00:11:57.584 ' 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.584 --rc genhtml_branch_coverage=1 00:11:57.584 --rc genhtml_function_coverage=1 00:11:57.584 --rc genhtml_legend=1 00:11:57.584 --rc geninfo_all_blocks=1 00:11:57.584 --rc geninfo_unexecuted_blocks=1 00:11:57.584 00:11:57.584 ' 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.584 --rc genhtml_branch_coverage=1 00:11:57.584 --rc genhtml_function_coverage=1 00:11:57.584 --rc genhtml_legend=1 00:11:57.584 --rc geninfo_all_blocks=1 00:11:57.584 --rc geninfo_unexecuted_blocks=1 00:11:57.584 00:11:57.584 ' 00:11:57.584 09:29:25 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:57.584 09:29:25 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:57.584 09:29:25 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:57.584 09:29:25 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:57.584 09:29:25 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:57.585 09:29:25 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:57.585 09:29:25 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:57.585 #define SPDK_CONFIG_H 00:11:57.585 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:57.585 #define SPDK_CONFIG_APPS 1 00:11:57.585 #define SPDK_CONFIG_ARCH native 00:11:57.585 #define SPDK_CONFIG_ASAN 1 00:11:57.585 #undef SPDK_CONFIG_AVAHI 00:11:57.585 #undef SPDK_CONFIG_CET 00:11:57.585 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:57.585 #define SPDK_CONFIG_COVERAGE 1 00:11:57.585 #define SPDK_CONFIG_CROSS_PREFIX 00:11:57.585 #undef SPDK_CONFIG_CRYPTO 00:11:57.585 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:57.585 #undef SPDK_CONFIG_CUSTOMOCF 00:11:57.585 #undef SPDK_CONFIG_DAOS 00:11:57.585 #define SPDK_CONFIG_DAOS_DIR 00:11:57.585 #define SPDK_CONFIG_DEBUG 1 00:11:57.585 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:57.585 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:11:57.585 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:11:57.585 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:11:57.585 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:57.585 #undef SPDK_CONFIG_DPDK_UADK 00:11:57.585 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:57.585 #define SPDK_CONFIG_EXAMPLES 1 00:11:57.585 #undef SPDK_CONFIG_FC 00:11:57.585 #define SPDK_CONFIG_FC_PATH 00:11:57.585 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:57.585 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:57.585 #define SPDK_CONFIG_FSDEV 1 00:11:57.585 #undef SPDK_CONFIG_FUSE 00:11:57.585 #undef SPDK_CONFIG_FUZZER 00:11:57.585 #define SPDK_CONFIG_FUZZER_LIB 00:11:57.585 #undef SPDK_CONFIG_GOLANG 00:11:57.585 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:57.585 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:57.585 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:57.585 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:57.585 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:57.585 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:57.585 #undef SPDK_CONFIG_HAVE_LZ4 00:11:57.585 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:57.585 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:57.585 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:57.585 #define SPDK_CONFIG_IDXD 1 00:11:57.585 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:57.585 #undef SPDK_CONFIG_IPSEC_MB 00:11:57.585 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:57.585 #define SPDK_CONFIG_ISAL 1 00:11:57.585 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:57.585 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:57.585 #define SPDK_CONFIG_LIBDIR 00:11:57.585 #undef SPDK_CONFIG_LTO 00:11:57.585 #define SPDK_CONFIG_MAX_LCORES 128 00:11:57.585 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:57.585 #define SPDK_CONFIG_NVME_CUSE 1 00:11:57.585 #undef SPDK_CONFIG_OCF 00:11:57.585 #define SPDK_CONFIG_OCF_PATH 00:11:57.585 #define SPDK_CONFIG_OPENSSL_PATH 00:11:57.585 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:57.585 #define SPDK_CONFIG_PGO_DIR 00:11:57.585 #undef SPDK_CONFIG_PGO_USE 00:11:57.585 #define SPDK_CONFIG_PREFIX /usr/local 00:11:57.585 #undef SPDK_CONFIG_RAID5F 00:11:57.585 #undef SPDK_CONFIG_RBD 00:11:57.585 #define SPDK_CONFIG_RDMA 1 00:11:57.585 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:57.585 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:57.585 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:57.585 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:57.585 #define SPDK_CONFIG_SHARED 1 00:11:57.585 #undef SPDK_CONFIG_SMA 00:11:57.585 #define SPDK_CONFIG_TESTS 1 00:11:57.585 #undef SPDK_CONFIG_TSAN 00:11:57.585 #define SPDK_CONFIG_UBLK 1 00:11:57.585 #define SPDK_CONFIG_UBSAN 1 00:11:57.585 #undef SPDK_CONFIG_UNIT_TESTS 00:11:57.585 #undef SPDK_CONFIG_URING 00:11:57.585 #define SPDK_CONFIG_URING_PATH 00:11:57.585 #undef SPDK_CONFIG_URING_ZNS 00:11:57.585 #undef SPDK_CONFIG_USDT 00:11:57.585 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:57.585 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:57.585 #undef SPDK_CONFIG_VFIO_USER 00:11:57.585 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:57.585 #define SPDK_CONFIG_VHOST 1 00:11:57.585 #define SPDK_CONFIG_VIRTIO 1 00:11:57.585 #undef SPDK_CONFIG_VTUNE 00:11:57.585 #define SPDK_CONFIG_VTUNE_DIR 00:11:57.585 #define SPDK_CONFIG_WERROR 1 00:11:57.585 #define SPDK_CONFIG_WPDK_DIR 00:11:57.585 #define SPDK_CONFIG_XNVME 1 00:11:57.585 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:57.585 09:29:25 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:57.585 09:29:25 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.585 09:29:25 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.586 09:29:25 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.586 09:29:25 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.586 09:29:25 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.586 09:29:25 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.586 09:29:25 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.586 09:29:25 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.586 09:29:25 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:57.586 09:29:25 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:57.586 09:29:25 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:57.586 09:29:25 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@140 -- # : main 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:57.586 09:29:25 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
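The long run of "# : 0" / "# : 1" entries at common/autotest_common.sh@58-@178 above is the standard bash default-assignment idiom: the no-op builtin : evaluates ${VAR:=default}, which assigns the default only when the variable is unset, and the flag is then exported for child scripts. xtrace prints the already-expanded arguments, which is why each traced line shows only a bare ": 0" or ": 1". A sketch of the pattern, using values visible in this run's trace:

# Give each autotest knob a default only if the caller didn't set it,
# then export it; the values below are the ones this run resolved to.
: "${RUN_NIGHTLY:=1}";              export RUN_NIGHTLY
: "${SPDK_TEST_NVME:=1}";           export SPDK_TEST_NVME
: "${SPDK_TEST_NVME_FDP:=1}";       export SPDK_TEST_NVME_FDP
: "${SPDK_TEST_XNVME:=1}";          export SPDK_TEST_XNVME
: "${SPDK_TEST_NVMF:=0}";           export SPDK_TEST_NVMF
: "${SPDK_TEST_NVME_INTERRUPT:=0}"; export SPDK_TEST_NVME_INTERRUPT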
00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81969 ]] 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81969 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.NlBOSl 00:11:57.587 09:29:25 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.NlBOSl/tests/xnvme /tmp/spdk.NlBOSl 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13245800448 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6339944448 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.587 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13245800448 00:11:57.588 09:29:25 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6339944448 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265221120 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=176128 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98522992640 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1179787264 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:57.588 * Looking for test storage... 
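The storage probe just traced parses `df -T` into per-mount associative arrays and then, in the lines that follow, walks the candidate directories and keeps the first one whose filesystem has enough free space. A condensed sketch of that logic, assuming the same ~2 GiB request seen above; the loop simplifies the real set_test_storage, which additionally special-cases tmpfs/ramfs mounts and can grow a fallback directory:

requested_size=2214592512   # 2 GiB plus overhead, as in the trace
declare -A mounts fss sizes avails uses

# One row per mount point, header stripped; columns follow `df -T` order.
# Sizes are reported in 1K blocks, hence the *1024 to get bytes.
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  avails["$mount"]=$((avail * 1024))
  sizes["$mount"]=$((size * 1024))
  uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

# Pick the first candidate directory whose backing mount has enough room.
for target_dir in "${storage_candidates[@]}"; do
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  if (( avails[$mount] >= requested_size )); then
    export SPDK_TEST_STORAGE=$target_dir
    break
  fi
done
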
00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13245800448 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.588 09:29:25 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:57.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.588 --rc genhtml_branch_coverage=1 00:11:57.588 --rc genhtml_function_coverage=1 00:11:57.588 --rc genhtml_legend=1 00:11:57.588 --rc geninfo_all_blocks=1 00:11:57.588 --rc geninfo_unexecuted_blocks=1 00:11:57.588 00:11:57.588 ' 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:57.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.588 --rc genhtml_branch_coverage=1 00:11:57.588 --rc genhtml_function_coverage=1 00:11:57.588 --rc genhtml_legend=1 00:11:57.588 --rc geninfo_all_blocks=1 
00:11:57.588 --rc geninfo_unexecuted_blocks=1 00:11:57.588 00:11:57.588 ' 00:11:57.588 09:29:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:57.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.588 --rc genhtml_branch_coverage=1 00:11:57.588 --rc genhtml_function_coverage=1 00:11:57.588 --rc genhtml_legend=1 00:11:57.588 --rc geninfo_all_blocks=1 00:11:57.588 --rc geninfo_unexecuted_blocks=1 00:11:57.588 00:11:57.588 ' 00:11:57.589 09:29:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:57.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.589 --rc genhtml_branch_coverage=1 00:11:57.589 --rc genhtml_function_coverage=1 00:11:57.589 --rc genhtml_legend=1 00:11:57.589 --rc geninfo_all_blocks=1 00:11:57.589 --rc geninfo_unexecuted_blocks=1 00:11:57.589 00:11:57.589 ' 00:11:57.589 09:29:25 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.589 09:29:25 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.589 09:29:25 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.589 09:29:25 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.589 09:29:25 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.589 09:29:25 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.589 09:29:25 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.589 09:29:25 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.589 09:29:25 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:57.589 09:29:25 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.589 09:29:25 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:57.589 09:29:25 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:58.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:58.161 Waiting for block devices as requested 00:11:58.161 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.161 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.421 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.421 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:03.713 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:03.713 09:29:31 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:03.974 09:29:31 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:03.974 09:29:31 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:04.235 09:29:31 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:04.235 09:29:31 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:04.235 No valid GPT data, bailing 00:12:04.235 09:29:31 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:04.235 09:29:31 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:04.235 09:29:31 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:04.235 09:29:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:04.235 09:29:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.235 09:29:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.235 09:29:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.235 ************************************ 00:12:04.235 START TEST xnvme_rpc 00:12:04.235 ************************************ 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82362 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82362 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82362 ']' 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.235 09:29:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:04.235 [2024-11-29 09:29:31.868066] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
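The xnvme_rpc test starting here boils down to: launch spdk_tgt, wait for its RPC socket, create an xnvme bdev over the raw NVMe device, read each parameter back out of the bdev config, and tear it down. A hand-written standalone equivalent — a sketch assuming the stock scripts/rpc.py client and the /var/tmp/spdk.sock default, with a simple socket poll standing in for the test harness's waitforlisten helper:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
tgt_pid=$!

# Wait until the target is listening on its default UNIX-domain RPC socket.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

# Create a bdev backed by xnvme using the libaio I/O mechanism
# (positional args as in the trace: filename, name, io_mechanism).
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

# Read the parameters back from the bdev subsystem config, as the test does.
"$SPDK/scripts/rpc.py" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'

"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"
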
00:12:04.235 [2024-11-29 09:29:31.868211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82362 ] 00:12:04.496 [2024-11-29 09:29:32.005507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:04.496 [2024-11-29 09:29:32.033628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.496 [2024-11-29 09:29:32.062644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.066 xnvme_bdev 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.066 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:05.067 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.067 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme 
conserve_cpu 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82362 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82362 ']' 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82362 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82362 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.327 killing process with pid 82362 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82362' 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82362 00:12:05.327 09:29:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82362 00:12:05.588 00:12:05.588 real 0m1.413s 00:12:05.588 user 0m1.482s 00:12:05.588 sys 0m0.390s 00:12:05.588 09:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.588 ************************************ 00:12:05.588 END TEST xnvme_rpc 00:12:05.588 ************************************ 00:12:05.588 09:29:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.588 09:29:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:05.588 09:29:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:05.588 09:29:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.588 09:29:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.588 ************************************ 00:12:05.588 START TEST xnvme_bdevperf 00:12:05.588 ************************************ 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:05.588 09:29:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:05.588 { 00:12:05.588 "subsystems": [ 00:12:05.588 { 00:12:05.588 "subsystem": "bdev", 00:12:05.588 "config": [ 00:12:05.588 { 00:12:05.588 "params": { 00:12:05.588 "io_mechanism": "libaio", 00:12:05.588 "conserve_cpu": false, 00:12:05.588 "filename": "/dev/nvme0n1", 00:12:05.588 "name": "xnvme_bdev" 00:12:05.588 }, 00:12:05.588 "method": "bdev_xnvme_create" 00:12:05.588 }, 00:12:05.588 { 00:12:05.588 "method": "bdev_wait_for_examine" 00:12:05.588 } 00:12:05.588 ] 00:12:05.588 } 00:12:05.588 ] 00:12:05.588 } 00:12:05.849 [2024-11-29 09:29:33.321070] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:05.849 [2024-11-29 09:29:33.321208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82419 ] 00:12:05.849 [2024-11-29 09:29:33.456493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:05.849 [2024-11-29 09:29:33.488125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.849 [2024-11-29 09:29:33.517288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.111 Running I/O for 5 seconds... 00:12:08.002 26843.00 IOPS, 104.86 MiB/s [2024-11-29T09:29:36.675Z] 25964.50 IOPS, 101.42 MiB/s [2024-11-29T09:29:37.659Z] 25204.67 IOPS, 98.46 MiB/s [2024-11-29T09:29:38.656Z] 25262.75 IOPS, 98.68 MiB/s 00:12:10.930 Latency(us) 00:12:10.930 [2024-11-29T09:29:38.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.930 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:10.930 xnvme_bdev : 5.00 25658.12 100.23 0.00 0.00 2489.24 507.27 8065.97 00:12:10.930 [2024-11-29T09:29:38.656Z] =================================================================================================================== 00:12:10.930 [2024-11-29T09:29:38.656Z] Total : 25658.12 100.23 0.00 0.00 2489.24 507.27 8065.97 00:12:11.192 09:29:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:11.192 09:29:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:11.192 09:29:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:11.192 09:29:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:11.192 09:29:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:11.192 { 00:12:11.192 "subsystems": [ 00:12:11.192 { 00:12:11.192 "subsystem": "bdev", 00:12:11.192 "config": [ 00:12:11.192 { 00:12:11.192 "params": { 00:12:11.192 "io_mechanism": "libaio", 00:12:11.192 "conserve_cpu": false, 00:12:11.192 "filename": "/dev/nvme0n1", 00:12:11.192 "name": "xnvme_bdev" 00:12:11.192 }, 00:12:11.192 "method": "bdev_xnvme_create" 00:12:11.192 }, 00:12:11.192 { 00:12:11.192 "method": "bdev_wait_for_examine" 
00:12:11.192 } 00:12:11.192 ] 00:12:11.192 } 00:12:11.192 ] 00:12:11.192 } 00:12:11.192 [2024-11-29 09:29:38.888818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:11.192 [2024-11-29 09:29:38.888963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82492 ] 00:12:11.454 [2024-11-29 09:29:39.024846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:11.454 [2024-11-29 09:29:39.055965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.454 [2024-11-29 09:29:39.085012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.716 Running I/O for 5 seconds... 00:12:13.606 33120.00 IOPS, 129.38 MiB/s [2024-11-29T09:29:42.276Z] 21540.50 IOPS, 84.14 MiB/s [2024-11-29T09:29:43.220Z] 15627.67 IOPS, 61.05 MiB/s [2024-11-29T09:29:44.610Z] 12675.50 IOPS, 49.51 MiB/s [2024-11-29T09:29:44.610Z] 10908.60 IOPS, 42.61 MiB/s 00:12:16.884 Latency(us) 00:12:16.884 [2024-11-29T09:29:44.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.884 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:16.884 xnvme_bdev : 5.02 10877.59 42.49 0.00 0.00 5867.16 72.07 34078.72 00:12:16.884 [2024-11-29T09:29:44.610Z] =================================================================================================================== 00:12:16.884 [2024-11-29T09:29:44.610Z] Total : 10877.59 42.49 0.00 0.00 5867.16 72.07 34078.72 00:12:16.884 00:12:16.884 real 0m11.164s 00:12:16.884 user 0m5.799s 00:12:16.884 sys 0m4.252s 00:12:16.884 ************************************ 00:12:16.884 END TEST xnvme_bdevperf 00:12:16.884 ************************************ 00:12:16.884 09:29:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.884 09:29:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:16.884 09:29:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:16.884 09:29:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.884 09:29:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.884 09:29:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.884 ************************************ 00:12:16.884 START TEST xnvme_fio_plugin 00:12:16.884 ************************************ 00:12:16.884 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:16.884 09:29:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:16.884 09:29:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:16.884 09:29:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:16.884 09:29:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:16.885 09:29:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:16.885 { 00:12:16.885 "subsystems": [ 00:12:16.885 { 00:12:16.885 "subsystem": "bdev", 00:12:16.885 "config": [ 00:12:16.885 { 00:12:16.885 "params": { 00:12:16.885 "io_mechanism": "libaio", 00:12:16.885 "conserve_cpu": false, 00:12:16.885 "filename": "/dev/nvme0n1", 00:12:16.885 "name": "xnvme_bdev" 00:12:16.885 }, 00:12:16.885 "method": "bdev_xnvme_create" 00:12:16.885 }, 00:12:16.885 { 00:12:16.885 "method": "bdev_wait_for_examine" 00:12:16.885 } 00:12:16.885 ] 00:12:16.885 } 00:12:16.885 ] 00:12:16.885 } 00:12:17.146 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:17.146 fio-3.35 00:12:17.146 Starting 1 thread 00:12:22.439 00:12:22.439 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82600: Fri Nov 29 09:29:50 2024 00:12:22.439 read: IOPS=34.7k, BW=136MiB/s (142MB/s)(678MiB/5001msec) 00:12:22.439 slat (usec): min=4, max=2175, avg=20.56, stdev=86.90 00:12:22.439 clat (usec): min=73, max=11194, avg=1313.37, stdev=624.24 00:12:22.439 lat (usec): min=170, max=11199, avg=1333.93, stdev=619.72 00:12:22.439 clat percentiles (usec): 00:12:22.439 | 
1.00th=[ 273], 5.00th=[ 478], 10.00th=[ 635], 20.00th=[ 832], 00:12:22.439 | 30.00th=[ 979], 40.00th=[ 1123], 50.00th=[ 1254], 60.00th=[ 1385], 00:12:22.439 | 70.00th=[ 1532], 80.00th=[ 1713], 90.00th=[ 2008], 95.00th=[ 2343], 00:12:22.439 | 99.00th=[ 3294], 99.50th=[ 3720], 99.90th=[ 5866], 99.95th=[ 7439], 00:12:22.439 | 99.99th=[ 9110] 00:12:22.439 bw ( KiB/s): min=122184, max=152616, per=99.17%, avg=137624.00, stdev=9773.58, samples=9 00:12:22.439 iops : min=30546, max=38154, avg=34406.00, stdev=2443.40, samples=9 00:12:22.439 lat (usec) : 100=0.01%, 250=0.73%, 500=4.78%, 750=9.80%, 1000=16.32% 00:12:22.439 lat (msec) : 2=58.19%, 4=9.83%, 10=0.35%, 20=0.01% 00:12:22.439 cpu : usr=41.28%, sys=47.74%, ctx=48, majf=0, minf=773 00:12:22.439 IO depths : 1=0.3%, 2=0.8%, 4=2.4%, 8=7.6%, 16=23.3%, 32=63.4%, >=64=2.3% 00:12:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.439 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:22.439 issued rwts: total=173504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:22.439 00:12:22.439 Run status group 0 (all jobs): 00:12:22.439 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=678MiB (711MB), run=5001-5001msec 00:12:23.012 ----------------------------------------------------- 00:12:23.012 Suppressions used: 00:12:23.012 count bytes template 00:12:23.012 1 11 /usr/src/fio/parse.c 00:12:23.012 1 8 libtcmalloc_minimal.so 00:12:23.012 1 904 libcrypto.so 00:12:23.012 ----------------------------------------------------- 00:12:23.012 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:23.012 09:29:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.012 { 00:12:23.012 "subsystems": [ 00:12:23.012 { 00:12:23.012 "subsystem": "bdev", 00:12:23.012 "config": [ 00:12:23.012 { 00:12:23.012 "params": { 00:12:23.012 "io_mechanism": "libaio", 00:12:23.012 "conserve_cpu": false, 00:12:23.012 "filename": "/dev/nvme0n1", 00:12:23.012 "name": "xnvme_bdev" 00:12:23.012 }, 00:12:23.012 "method": "bdev_xnvme_create" 00:12:23.012 }, 00:12:23.012 { 00:12:23.012 "method": "bdev_wait_for_examine" 00:12:23.012 } 00:12:23.012 ] 00:12:23.012 } 00:12:23.012 ] 00:12:23.012 } 00:12:23.273 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:23.273 fio-3.35 00:12:23.273 Starting 1 thread 00:12:28.574 00:12:28.574 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82685: Fri Nov 29 09:29:56 2024 00:12:28.574 write: IOPS=31.7k, BW=124MiB/s (130MB/s)(620MiB/5001msec); 0 zone resets 00:12:28.574 slat (usec): min=4, max=1816, avg=18.23, stdev=62.02 00:12:28.574 clat (usec): min=9, max=16834, avg=1582.18, stdev=2278.02 00:12:28.574 lat (usec): min=69, max=16839, avg=1600.41, stdev=2276.26 00:12:28.574 clat percentiles (usec): 00:12:28.574 | 1.00th=[ 178], 5.00th=[ 351], 10.00th=[ 474], 20.00th=[ 644], 00:12:28.574 | 30.00th=[ 775], 40.00th=[ 889], 50.00th=[ 1004], 60.00th=[ 1123], 00:12:28.575 | 70.00th=[ 1270], 80.00th=[ 1483], 90.00th=[ 2024], 95.00th=[ 8094], 00:12:28.575 | 99.00th=[12125], 99.50th=[12911], 99.90th=[14222], 99.95th=[14746], 00:12:28.575 | 99.99th=[15533] 00:12:28.575 bw ( KiB/s): min=68384, max=168128, per=97.40%, avg=123653.00, stdev=44545.80, samples=9 00:12:28.575 iops : min=17096, max=42032, avg=30913.22, stdev=11136.43, samples=9 00:12:28.575 lat (usec) : 10=0.01%, 20=0.01%, 50=0.03%, 100=0.13%, 250=1.88% 00:12:28.575 lat (usec) : 500=9.36%, 750=16.46%, 1000=21.61% 00:12:28.575 lat (msec) : 2=40.34%, 4=4.10%, 10=2.85%, 20=3.23% 00:12:28.575 cpu : usr=50.50%, sys=35.74%, ctx=34, majf=0, minf=774 00:12:28.575 IO depths : 1=0.2%, 2=0.7%, 4=2.4%, 8=7.3%, 16=19.5%, 32=65.7%, >=64=4.1% 00:12:28.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.575 complete : 0=0.0%, 4=97.1%, 8=0.5%, 16=0.6%, 32=0.5%, 64=1.3%, >=64=0.0% 00:12:28.575 issued rwts: total=0,158722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.575 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:28.575 00:12:28.575 Run status group 0 (all jobs): 00:12:28.575 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), 
io=620MiB (650MB), run=5001-5001msec 00:12:29.147 ----------------------------------------------------- 00:12:29.147 Suppressions used: 00:12:29.147 count bytes template 00:12:29.147 1 11 /usr/src/fio/parse.c 00:12:29.147 1 8 libtcmalloc_minimal.so 00:12:29.147 1 904 libcrypto.so 00:12:29.147 ----------------------------------------------------- 00:12:29.147 00:12:29.147 00:12:29.147 real 0m12.152s 00:12:29.147 user 0m5.749s 00:12:29.147 sys 0m4.780s 00:12:29.147 09:29:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.147 09:29:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:29.147 ************************************ 00:12:29.147 END TEST xnvme_fio_plugin 00:12:29.147 ************************************ 00:12:29.147 09:29:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:29.147 09:29:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:29.147 09:29:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:29.147 09:29:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:29.147 09:29:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:29.147 09:29:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.147 09:29:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.147 ************************************ 00:12:29.147 START TEST xnvme_rpc 00:12:29.147 ************************************ 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82760 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82760 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82760 ']' 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.147 09:29:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:29.147 [2024-11-29 09:29:56.796479] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
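This second xnvme_rpc pass repeats the flow of the first, but with conserve_cpu enabled; the only difference on the wire is the -c flag and the value expected when reading the config back. In the same hypothetical standalone form as the earlier sketch:

SPDK=/home/vagrant/spdk_repo/spdk

# Same create call as before, plus -c to enable conserve_cpu.
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c

"$SPDK/scripts/rpc.py" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected output: true

"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
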
00:12:29.147 [2024-11-29 09:29:56.796636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82760 ] 00:12:29.408 [2024-11-29 09:29:56.933468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:29.408 [2024-11-29 09:29:56.961989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.408 [2024-11-29 09:29:56.991632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.983 xnvme_bdev 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.983 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.244 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme 
conserve_cpu 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82760 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82760 ']' 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82760 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82760 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.245 killing process with pid 82760 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82760' 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82760 00:12:30.245 09:29:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82760 00:12:30.506 00:12:30.506 real 0m1.439s 00:12:30.506 user 0m1.493s 00:12:30.506 sys 0m0.415s 00:12:30.507 ************************************ 00:12:30.507 END TEST xnvme_rpc 00:12:30.507 ************************************ 00:12:30.507 09:29:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.507 09:29:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.507 09:29:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:30.507 09:29:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:30.507 09:29:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.507 09:29:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.507 ************************************ 00:12:30.507 START TEST xnvme_bdevperf 00:12:30.507 ************************************ 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:30.507 09:29:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:30.816 { 00:12:30.816 "subsystems": [ 00:12:30.816 { 00:12:30.816 "subsystem": "bdev", 00:12:30.816 "config": [ 00:12:30.816 { 00:12:30.816 "params": { 00:12:30.816 "io_mechanism": "libaio", 00:12:30.816 "conserve_cpu": true, 00:12:30.816 "filename": "/dev/nvme0n1", 00:12:30.816 "name": "xnvme_bdev" 00:12:30.816 }, 00:12:30.816 "method": "bdev_xnvme_create" 00:12:30.816 }, 00:12:30.816 { 00:12:30.816 "method": "bdev_wait_for_examine" 00:12:30.816 } 00:12:30.816 ] 00:12:30.816 } 00:12:30.816 ] 00:12:30.816 } 00:12:30.816 [2024-11-29 09:29:58.292551] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:30.816 [2024-11-29 09:29:58.292736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82823 ] 00:12:30.816 [2024-11-29 09:29:58.428893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:30.816 [2024-11-29 09:29:58.451845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.816 [2024-11-29 09:29:58.480475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.079 Running I/O for 5 seconds... 00:12:32.967 32291.00 IOPS, 126.14 MiB/s [2024-11-29T09:30:01.637Z] 31750.00 IOPS, 124.02 MiB/s [2024-11-29T09:30:03.025Z] 31412.00 IOPS, 122.70 MiB/s [2024-11-29T09:30:03.969Z] 31790.00 IOPS, 124.18 MiB/s 00:12:36.243 Latency(us) 00:12:36.243 [2024-11-29T09:30:03.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.243 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:36.243 xnvme_bdev : 5.00 31994.63 124.98 0.00 0.00 1995.76 330.83 6654.42 00:12:36.243 [2024-11-29T09:30:03.969Z] =================================================================================================================== 00:12:36.243 [2024-11-29T09:30:03.969Z] Total : 31994.63 124.98 0.00 0.00 1995.76 330.83 6654.42 00:12:36.243 09:30:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:36.243 09:30:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:36.243 09:30:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:36.243 09:30:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:36.243 09:30:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:36.243 { 00:12:36.243 "subsystems": [ 00:12:36.243 { 00:12:36.243 "subsystem": "bdev", 00:12:36.243 "config": [ 00:12:36.243 { 00:12:36.243 "params": { 00:12:36.243 "io_mechanism": "libaio", 00:12:36.243 "conserve_cpu": true, 00:12:36.243 "filename": "/dev/nvme0n1", 00:12:36.243 "name": "xnvme_bdev" 00:12:36.243 }, 00:12:36.243 "method": "bdev_xnvme_create" 00:12:36.243 }, 00:12:36.243 { 00:12:36.243 "method": "bdev_wait_for_examine" 
00:12:36.243 } 00:12:36.243 ] 00:12:36.243 } 00:12:36.243 ] 00:12:36.243 } 00:12:36.243 [2024-11-29 09:30:03.890102] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:36.243 [2024-11-29 09:30:03.890232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82887 ] 00:12:36.504 [2024-11-29 09:30:04.024614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:36.504 [2024-11-29 09:30:04.047418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.504 [2024-11-29 09:30:04.076225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.504 Running I/O for 5 seconds... 00:12:38.874 34829.00 IOPS, 136.05 MiB/s [2024-11-29T09:30:07.567Z] 34047.50 IOPS, 133.00 MiB/s [2024-11-29T09:30:08.511Z] 33321.67 IOPS, 130.16 MiB/s [2024-11-29T09:30:09.454Z] 33107.75 IOPS, 129.33 MiB/s [2024-11-29T09:30:09.454Z] 28554.40 IOPS, 111.54 MiB/s 00:12:41.728 Latency(us) 00:12:41.728 [2024-11-29T09:30:09.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.728 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:41.728 xnvme_bdev : 5.02 28465.62 111.19 0.00 0.00 2239.84 59.86 28029.24 00:12:41.728 [2024-11-29T09:30:09.454Z] =================================================================================================================== 00:12:41.728 [2024-11-29T09:30:09.454Z] Total : 28465.62 111.19 0.00 0.00 2239.84 59.86 28029.24 00:12:41.728 00:12:41.728 real 0m11.206s 00:12:41.728 user 0m3.753s 00:12:41.728 sys 0m5.650s 00:12:41.728 ************************************ 00:12:41.728 END TEST xnvme_bdevperf 00:12:41.728 ************************************ 00:12:41.728 09:30:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.728 09:30:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:41.988 09:30:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:41.988 09:30:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:41.988 09:30:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.988 09:30:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:41.988 ************************************ 00:12:41.988 START TEST xnvme_fio_plugin 00:12:41.988 ************************************ 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:41.988 09:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.988 { 00:12:41.988 "subsystems": [ 00:12:41.988 { 00:12:41.988 "subsystem": "bdev", 00:12:41.988 "config": [ 00:12:41.988 { 00:12:41.988 "params": { 00:12:41.988 "io_mechanism": "libaio", 00:12:41.988 "conserve_cpu": true, 00:12:41.988 "filename": "/dev/nvme0n1", 00:12:41.988 "name": "xnvme_bdev" 00:12:41.988 }, 00:12:41.988 "method": "bdev_xnvme_create" 00:12:41.988 }, 00:12:41.988 { 00:12:41.988 "method": "bdev_wait_for_examine" 00:12:41.988 } 00:12:41.988 ] 00:12:41.988 } 00:12:41.988 ] 00:12:41.988 } 00:12:41.988 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:41.988 fio-3.35 00:12:41.988 Starting 1 thread 00:12:48.574 00:12:48.574 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82995: Fri Nov 29 09:30:15 2024 00:12:48.574 read: IOPS=34.6k, BW=135MiB/s (142MB/s)(675MiB/5001msec) 00:12:48.574 slat (usec): min=4, max=2095, avg=20.06, stdev=90.43 00:12:48.574 clat (usec): min=77, max=4523, avg=1312.46, stdev=511.92 00:12:48.574 lat (usec): min=203, max=4685, avg=1332.52, stdev=503.45 00:12:48.574 clat percentiles (usec): 
00:12:48.574 | 1.00th=[ 281], 5.00th=[ 510], 10.00th=[ 668], 20.00th=[ 881], 00:12:48.574 | 30.00th=[ 1045], 40.00th=[ 1172], 50.00th=[ 1303], 60.00th=[ 1418], 00:12:48.574 | 70.00th=[ 1549], 80.00th=[ 1713], 90.00th=[ 1926], 95.00th=[ 2147], 00:12:48.574 | 99.00th=[ 2802], 99.50th=[ 3032], 99.90th=[ 3654], 99.95th=[ 3818], 00:12:48.574 | 99.99th=[ 4293] 00:12:48.574 bw ( KiB/s): min=130128, max=148288, per=99.57%, avg=137607.11, stdev=6342.71, samples=9 00:12:48.574 iops : min=32532, max=37072, avg=34401.78, stdev=1585.68, samples=9 00:12:48.574 lat (usec) : 100=0.01%, 250=0.67%, 500=4.16%, 750=8.45%, 1000=13.74% 00:12:48.574 lat (msec) : 2=65.17%, 4=7.78%, 10=0.03% 00:12:48.574 cpu : usr=44.88%, sys=46.88%, ctx=13, majf=0, minf=773 00:12:48.574 IO depths : 1=0.6%, 2=1.3%, 4=3.3%, 8=8.7%, 16=23.4%, 32=60.7%, >=64=2.1% 00:12:48.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.574 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:48.574 issued rwts: total=172791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.574 00:12:48.574 Run status group 0 (all jobs): 00:12:48.575 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=675MiB (708MB), run=5001-5001msec 00:12:48.575 ----------------------------------------------------- 00:12:48.575 Suppressions used: 00:12:48.575 count bytes template 00:12:48.575 1 11 /usr/src/fio/parse.c 00:12:48.575 1 8 libtcmalloc_minimal.so 00:12:48.575 1 904 libcrypto.so 00:12:48.575 ----------------------------------------------------- 00:12:48.575 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:48.575 09:30:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.575 { 00:12:48.575 "subsystems": [ 00:12:48.575 { 00:12:48.575 "subsystem": "bdev", 00:12:48.575 "config": [ 00:12:48.575 { 00:12:48.575 "params": { 00:12:48.575 "io_mechanism": "libaio", 00:12:48.575 "conserve_cpu": true, 00:12:48.575 "filename": "/dev/nvme0n1", 00:12:48.575 "name": "xnvme_bdev" 00:12:48.575 }, 00:12:48.575 "method": "bdev_xnvme_create" 00:12:48.575 }, 00:12:48.575 { 00:12:48.575 "method": "bdev_wait_for_examine" 00:12:48.575 } 00:12:48.575 ] 00:12:48.575 } 00:12:48.575 ] 00:12:48.575 } 00:12:48.575 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:48.575 fio-3.35 00:12:48.575 Starting 1 thread 00:12:53.874 00:12:53.874 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83076: Fri Nov 29 09:30:21 2024 00:12:53.874 write: IOPS=33.0k, BW=129MiB/s (135MB/s)(645MiB/5009msec); 0 zone resets 00:12:53.874 slat (usec): min=4, max=1816, avg=20.78, stdev=84.98 00:12:53.874 clat (usec): min=8, max=20629, avg=1393.26, stdev=1458.93 00:12:53.874 lat (usec): min=67, max=20634, avg=1414.03, stdev=1455.83 00:12:53.874 clat percentiles (usec): 00:12:53.874 | 1.00th=[ 247], 5.00th=[ 441], 10.00th=[ 594], 20.00th=[ 799], 00:12:53.874 | 30.00th=[ 955], 40.00th=[ 1090], 50.00th=[ 1205], 60.00th=[ 1336], 00:12:53.874 | 70.00th=[ 1467], 80.00th=[ 1647], 90.00th=[ 1926], 95.00th=[ 2278], 00:12:53.874 | 99.00th=[11469], 99.50th=[13566], 99.90th=[15795], 99.95th=[16319], 00:12:53.874 | 99.99th=[17433] 00:12:53.874 bw ( KiB/s): min=53309, max=155456, per=100.00%, avg=132130.90, stdev=29450.02, samples=10 00:12:53.874 iops : min=13327, max=38864, avg=33032.70, stdev=7362.58, samples=10 00:12:53.874 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.04%, 250=1.00% 00:12:53.874 lat (usec) : 500=5.55%, 750=10.72%, 1000=15.69% 00:12:53.874 lat (msec) : 2=58.48%, 4=7.06%, 10=0.23%, 20=1.22%, 50=0.01% 00:12:53.874 cpu : usr=43.77%, sys=46.71%, ctx=13, majf=0, minf=774 00:12:53.874 IO depths : 1=0.4%, 2=1.1%, 4=3.0%, 8=8.5%, 16=23.2%, 32=61.2%, >=64=2.7% 00:12:53.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.874 complete : 0=0.0%, 4=97.8%, 8=0.2%, 16=0.2%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:53.874 issued rwts: total=0,165236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:53.874 00:12:53.874 Run status group 0 (all jobs): 00:12:53.874 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s 
(135MB/s-135MB/s), io=645MiB (677MB), run=5009-5009msec 00:12:54.136 ----------------------------------------------------- 00:12:54.136 Suppressions used: 00:12:54.136 count bytes template 00:12:54.136 1 11 /usr/src/fio/parse.c 00:12:54.136 1 8 libtcmalloc_minimal.so 00:12:54.136 1 904 libcrypto.so 00:12:54.136 ----------------------------------------------------- 00:12:54.136 00:12:54.136 00:12:54.136 real 0m12.155s 00:12:54.136 user 0m5.594s 00:12:54.136 sys 0m5.277s 00:12:54.136 09:30:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.136 ************************************ 00:12:54.136 END TEST xnvme_fio_plugin 00:12:54.136 ************************************ 00:12:54.136 09:30:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:54.136 09:30:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:54.136 09:30:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:54.136 09:30:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.136 09:30:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.136 ************************************ 00:12:54.136 START TEST xnvme_rpc 00:12:54.136 ************************************ 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83151 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83151 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83151 ']' 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
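For reference, the xnvme fio-plugin rounds above can be reproduced outside the test harness. A minimal sketch, assuming the checkout path, fio source location, target device node, and ASAN runtime seen in this log (all of which are environment-specific); every flag is taken from the harness invocation logged above, only the JSON file on disk is new:

  # Reproduce one fio round against an xnvme bdev (sketch, assumptions as noted).
  SPDK=/home/vagrant/spdk_repo/spdk
  cat > /tmp/xnvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "libaio",
              "conserve_cpu": true,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # ASAN builds must preload the sanitizer runtime ahead of the fio plugin,
  # exactly as the harness does via LD_PRELOAD above.
  LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev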
00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.136 09:30:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.136 [2024-11-29 09:30:21.807119] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:54.137 [2024-11-29 09:30:21.807804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83151 ] 00:12:54.398 [2024-11-29 09:30:21.944119] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:54.398 [2024-11-29 09:30:21.970625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.398 [2024-11-29 09:30:22.001422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.971 xnvme_bdev 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:54.971 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83151 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83151 ']' 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83151 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83151 00:12:55.233 killing process with pid 83151 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83151' 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83151 00:12:55.233 09:30:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83151 00:12:55.495 00:12:55.495 real 0m1.424s 00:12:55.495 user 0m1.525s 00:12:55.495 sys 0m0.389s 00:12:55.495 09:30:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.495 09:30:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.495 ************************************ 00:12:55.495 END TEST xnvme_rpc 00:12:55.495 ************************************ 00:12:55.495 09:30:23 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:55.495 09:30:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.495 09:30:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.495 09:30:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.495 ************************************ 00:12:55.495 START TEST xnvme_bdevperf 00:12:55.495 ************************************ 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local 
io_pattern 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:55.495 09:30:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.757 { 00:12:55.757 "subsystems": [ 00:12:55.757 { 00:12:55.757 "subsystem": "bdev", 00:12:55.757 "config": [ 00:12:55.757 { 00:12:55.757 "params": { 00:12:55.757 "io_mechanism": "io_uring", 00:12:55.757 "conserve_cpu": false, 00:12:55.757 "filename": "/dev/nvme0n1", 00:12:55.757 "name": "xnvme_bdev" 00:12:55.757 }, 00:12:55.757 "method": "bdev_xnvme_create" 00:12:55.757 }, 00:12:55.757 { 00:12:55.757 "method": "bdev_wait_for_examine" 00:12:55.757 } 00:12:55.757 ] 00:12:55.757 } 00:12:55.757 ] 00:12:55.757 } 00:12:55.757 [2024-11-29 09:30:23.285264] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:55.757 [2024-11-29 09:30:23.285426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83214 ] 00:12:55.757 [2024-11-29 09:30:23.422836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:55.757 [2024-11-29 09:30:23.453084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.019 [2024-11-29 09:30:23.482947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.019 Running I/O for 5 seconds... 
00:12:57.915 34184.00 IOPS, 133.53 MiB/s [2024-11-29T09:30:27.029Z] 34498.50 IOPS, 134.76 MiB/s [2024-11-29T09:30:27.602Z] 33835.67 IOPS, 132.17 MiB/s [2024-11-29T09:30:28.990Z] 33547.50 IOPS, 131.04 MiB/s [2024-11-29T09:30:28.990Z] 33468.20 IOPS, 130.74 MiB/s 00:13:01.264 Latency(us) 00:13:01.264 [2024-11-29T09:30:28.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.264 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:01.264 xnvme_bdev : 5.00 33457.58 130.69 0.00 0.00 1908.71 289.87 19055.85 00:13:01.264 [2024-11-29T09:30:28.990Z] =================================================================================================================== 00:13:01.264 [2024-11-29T09:30:28.990Z] Total : 33457.58 130.69 0.00 0.00 1908.71 289.87 19055.85 00:13:01.264 09:30:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.264 09:30:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:01.264 09:30:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:01.264 09:30:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.264 09:30:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.264 { 00:13:01.264 "subsystems": [ 00:13:01.264 { 00:13:01.264 "subsystem": "bdev", 00:13:01.264 "config": [ 00:13:01.264 { 00:13:01.264 "params": { 00:13:01.264 "io_mechanism": "io_uring", 00:13:01.264 "conserve_cpu": false, 00:13:01.264 "filename": "/dev/nvme0n1", 00:13:01.264 "name": "xnvme_bdev" 00:13:01.264 }, 00:13:01.264 "method": "bdev_xnvme_create" 00:13:01.264 }, 00:13:01.264 { 00:13:01.264 "method": "bdev_wait_for_examine" 00:13:01.264 } 00:13:01.264 ] 00:13:01.264 } 00:13:01.264 ] 00:13:01.264 } 00:13:01.264 [2024-11-29 09:30:28.858703] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:01.264 [2024-11-29 09:30:28.858837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83278 ] 00:13:01.526 [2024-11-29 09:30:28.994626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:01.526 [2024-11-29 09:30:29.021819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.526 [2024-11-29 09:30:29.050650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.526 Running I/O for 5 seconds... 
00:13:03.859 6747.00 IOPS, 26.36 MiB/s [2024-11-29T09:30:32.531Z] 6788.00 IOPS, 26.52 MiB/s [2024-11-29T09:30:33.476Z] 6871.00 IOPS, 26.84 MiB/s [2024-11-29T09:30:34.419Z] 6952.50 IOPS, 27.16 MiB/s [2024-11-29T09:30:34.419Z] 7014.60 IOPS, 27.40 MiB/s 00:13:06.693 Latency(us) 00:13:06.693 [2024-11-29T09:30:34.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.693 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:06.693 xnvme_bdev : 5.01 7008.24 27.38 0.00 0.00 9116.35 65.77 30045.74 00:13:06.693 [2024-11-29T09:30:34.419Z] =================================================================================================================== 00:13:06.693 [2024-11-29T09:30:34.419Z] Total : 7008.24 27.38 0.00 0.00 9116.35 65.77 30045.74 00:13:06.693 ************************************ 00:13:06.693 END TEST xnvme_bdevperf 00:13:06.693 ************************************ 00:13:06.693 00:13:06.693 real 0m11.145s 00:13:06.693 user 0m4.308s 00:13:06.693 sys 0m6.586s 00:13:06.693 09:30:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.693 09:30:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:06.693 09:30:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:06.693 09:30:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:06.693 09:30:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.693 09:30:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.954 ************************************ 00:13:06.954 START TEST xnvme_fio_plugin 00:13:06.954 ************************************ 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for 
sanitizer in "${sanitizers[@]}" 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:06.954 09:30:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:06.954 { 00:13:06.954 "subsystems": [ 00:13:06.954 { 00:13:06.954 "subsystem": "bdev", 00:13:06.954 "config": [ 00:13:06.954 { 00:13:06.954 "params": { 00:13:06.954 "io_mechanism": "io_uring", 00:13:06.954 "conserve_cpu": false, 00:13:06.954 "filename": "/dev/nvme0n1", 00:13:06.954 "name": "xnvme_bdev" 00:13:06.954 }, 00:13:06.954 "method": "bdev_xnvme_create" 00:13:06.954 }, 00:13:06.954 { 00:13:06.954 "method": "bdev_wait_for_examine" 00:13:06.954 } 00:13:06.954 ] 00:13:06.954 } 00:13:06.954 ] 00:13:06.954 } 00:13:06.954 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:06.954 fio-3.35 00:13:06.954 Starting 1 thread 00:13:13.566 00:13:13.566 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83381: Fri Nov 29 09:30:40 2024 00:13:13.566 read: IOPS=33.9k, BW=132MiB/s (139MB/s)(662MiB/5001msec) 00:13:13.566 slat (nsec): min=2873, max=80946, avg=3620.72, stdev=2060.84 00:13:13.566 clat (usec): min=846, max=3651, avg=1741.74, stdev=352.34 00:13:13.566 lat (usec): min=849, max=3661, avg=1745.36, stdev=352.68 00:13:13.566 clat percentiles (usec): 00:13:13.566 | 1.00th=[ 1074], 5.00th=[ 1221], 10.00th=[ 1319], 20.00th=[ 1450], 00:13:13.566 | 30.00th=[ 1549], 40.00th=[ 1631], 50.00th=[ 1713], 60.00th=[ 1795], 00:13:13.566 | 70.00th=[ 1876], 80.00th=[ 2008], 90.00th=[ 2180], 95.00th=[ 2376], 00:13:13.566 | 99.00th=[ 2737], 99.50th=[ 2933], 99.90th=[ 3359], 99.95th=[ 3490], 00:13:13.566 | 99.99th=[ 3621] 00:13:13.566 bw ( KiB/s): min=127233, max=144384, per=99.61%, avg=135025.89, stdev=6533.01, samples=9 00:13:13.566 iops : min=31808, max=36096, avg=33756.44, stdev=1633.29, samples=9 00:13:13.566 lat (usec) : 1000=0.29% 00:13:13.566 lat (msec) : 2=79.14%, 4=20.57% 00:13:13.566 cpu : usr=30.74%, sys=68.06%, ctx=11, majf=0, minf=771 00:13:13.566 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:13.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.566 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:13.566 
issued rwts: total=169472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.566 00:13:13.566 Run status group 0 (all jobs): 00:13:13.566 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=662MiB (694MB), run=5001-5001msec 00:13:13.566 ----------------------------------------------------- 00:13:13.566 Suppressions used: 00:13:13.566 count bytes template 00:13:13.566 1 11 /usr/src/fio/parse.c 00:13:13.566 1 8 libtcmalloc_minimal.so 00:13:13.566 1 904 libcrypto.so 00:13:13.566 ----------------------------------------------------- 00:13:13.566 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:13.566 09:30:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.566 { 00:13:13.566 "subsystems": [ 00:13:13.566 { 00:13:13.566 "subsystem": "bdev", 00:13:13.566 "config": [ 00:13:13.566 { 00:13:13.566 "params": { 00:13:13.566 "io_mechanism": "io_uring", 00:13:13.566 "conserve_cpu": false, 00:13:13.566 "filename": "/dev/nvme0n1", 00:13:13.566 "name": "xnvme_bdev" 00:13:13.566 }, 00:13:13.566 "method": "bdev_xnvme_create" 00:13:13.566 }, 00:13:13.566 { 00:13:13.566 "method": "bdev_wait_for_examine" 00:13:13.566 } 00:13:13.566 ] 00:13:13.566 } 00:13:13.566 ] 00:13:13.566 } 00:13:13.566 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:13.566 fio-3.35 00:13:13.566 Starting 1 thread 00:13:18.861 00:13:18.861 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83467: Fri Nov 29 09:30:46 2024 00:13:18.861 write: IOPS=34.2k, BW=134MiB/s (140MB/s)(669MiB/5010msec); 0 zone resets 00:13:18.861 slat (nsec): min=2891, max=91293, avg=3764.33, stdev=2123.97 00:13:18.861 clat (usec): min=72, max=20243, avg=1722.05, stdev=1133.10 00:13:18.861 lat (usec): min=76, max=20252, avg=1725.82, stdev=1133.17 00:13:18.861 clat percentiles (usec): 00:13:18.861 | 1.00th=[ 668], 5.00th=[ 1156], 10.00th=[ 1254], 20.00th=[ 1352], 00:13:18.861 | 30.00th=[ 1434], 40.00th=[ 1516], 50.00th=[ 1582], 60.00th=[ 1680], 00:13:18.861 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2040], 95.00th=[ 2245], 00:13:18.861 | 99.00th=[ 9634], 99.50th=[11863], 99.90th=[13698], 99.95th=[14484], 00:13:18.861 | 99.99th=[16188] 00:13:18.862 bw ( KiB/s): min=73864, max=163152, per=100.00%, avg=136960.00, stdev=23640.58, samples=10 00:13:18.862 iops : min=18466, max=40788, avg=34240.00, stdev=5910.14, samples=10 00:13:18.862 lat (usec) : 100=0.01%, 250=0.09%, 500=0.36%, 750=1.11%, 1000=0.77% 00:13:18.862 lat (msec) : 2=86.34%, 4=10.13%, 10=0.23%, 20=0.96%, 50=0.01% 00:13:18.862 cpu : usr=31.82%, sys=66.92%, ctx=14, majf=0, minf=772 00:13:18.862 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.0%, 16=24.1%, 32=51.4%, >=64=2.0% 00:13:18.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.862 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:18.862 issued rwts: total=0,171260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.862 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:18.862 00:13:18.862 Run status group 0 (all jobs): 00:13:18.862 WRITE: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=669MiB (701MB), run=5010-5010msec 00:13:18.862 ----------------------------------------------------- 00:13:18.862 Suppressions used: 00:13:18.862 count bytes template 00:13:18.862 1 11 /usr/src/fio/parse.c 00:13:18.862 1 8 libtcmalloc_minimal.so 00:13:18.862 1 904 libcrypto.so 00:13:18.862 ----------------------------------------------------- 00:13:18.862 00:13:18.862 00:13:18.862 real 0m12.051s 00:13:18.862 user 0m4.273s 00:13:18.862 sys 0m7.333s 00:13:18.862 09:30:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.862 ************************************ 00:13:18.862 END TEST xnvme_fio_plugin 00:13:18.862 ************************************ 00:13:18.862 09:30:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:18.862 09:30:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:18.862 09:30:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:18.862 09:30:46 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:18.862 09:30:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:18.862 09:30:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:18.862 09:30:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.862 09:30:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.862 ************************************ 00:13:18.862 START TEST xnvme_rpc 00:13:18.862 ************************************ 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83548 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83548 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83548 ']' 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.862 09:30:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.123 [2024-11-29 09:30:46.624679] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:19.123 [2024-11-29 09:30:46.624958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83548 ] 00:13:19.123 [2024-11-29 09:30:46.757542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
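Once this spdk_tgt instance is listening on /var/tmp/spdk.sock, the xtrace that follows drives the same create/inspect/delete RPC sequence as the earlier rounds, now with conserve_cpu enabled. A minimal equivalent by hand — a sketch assuming rpc.py from the same checkout and the default RPC socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Create the bdev on top of /dev/nvme0n1 with the io_uring backend;
  # -c turns on conserve_cpu, matching cc["true"]=-c in the script above.
  $RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  # rpc_xnvme resolves a single creation parameter back out of the saved
  # framework config; e.g. the conserve_cpu flag that the test checks below:
  $RPC framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  # Tear the bdev down again before the target process is killed.
  $RPC bdev_xnvme_delete xnvme_bdev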
00:13:19.123 [2024-11-29 09:30:46.787726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.123 [2024-11-29 09:30:46.808859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 xnvme_bdev 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83548 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83548 ']' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83548 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83548 00:13:20.068 killing process with pid 83548 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83548' 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83548 00:13:20.068 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83548 00:13:20.330 00:13:20.330 real 0m1.390s 00:13:20.330 user 0m1.497s 00:13:20.330 sys 0m0.365s 00:13:20.330 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.330 09:30:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.330 ************************************ 00:13:20.330 END TEST xnvme_rpc 00:13:20.330 ************************************ 00:13:20.330 09:30:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:20.330 09:30:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.330 09:30:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.330 09:30:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.330 ************************************ 00:13:20.330 START TEST xnvme_bdevperf 00:13:20.330 ************************************ 00:13:20.330 09:30:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:20.330 09:30:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:20.330 09:30:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:20.330 09:30:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.330 09:30:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:20.330 09:30:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:20.330 09:30:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:20.330 09:30:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:20.330 { 00:13:20.330 "subsystems": [ 00:13:20.330 { 00:13:20.330 "subsystem": "bdev", 00:13:20.330 "config": [ 
00:13:20.330 { 00:13:20.330 "params": { 00:13:20.330 "io_mechanism": "io_uring", 00:13:20.330 "conserve_cpu": true, 00:13:20.330 "filename": "/dev/nvme0n1", 00:13:20.330 "name": "xnvme_bdev" 00:13:20.330 }, 00:13:20.330 "method": "bdev_xnvme_create" 00:13:20.330 }, 00:13:20.330 { 00:13:20.330 "method": "bdev_wait_for_examine" 00:13:20.330 } 00:13:20.330 ] 00:13:20.330 } 00:13:20.330 ] 00:13:20.330 } 00:13:20.591 [2024-11-29 09:30:48.074231] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:20.591 [2024-11-29 09:30:48.074368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83600 ] 00:13:20.592 [2024-11-29 09:30:48.210812] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:20.592 [2024-11-29 09:30:48.238288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.592 [2024-11-29 09:30:48.267277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.852 Running I/O for 5 seconds... 00:13:22.740 34704.00 IOPS, 135.56 MiB/s [2024-11-29T09:30:51.408Z] 36164.50 IOPS, 141.27 MiB/s [2024-11-29T09:30:52.411Z] 35363.67 IOPS, 138.14 MiB/s [2024-11-29T09:30:53.807Z] 35214.25 IOPS, 137.56 MiB/s [2024-11-29T09:30:53.807Z] 35128.20 IOPS, 137.22 MiB/s 00:13:26.081 Latency(us) 00:13:26.081 [2024-11-29T09:30:53.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.081 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:26.081 xnvme_bdev : 5.01 35100.40 137.11 0.00 0.00 1819.06 359.19 13812.97 00:13:26.081 [2024-11-29T09:30:53.807Z] =================================================================================================================== 00:13:26.081 [2024-11-29T09:30:53.807Z] Total : 35100.40 137.11 0.00 0.00 1819.06 359.19 13812.97 00:13:26.081 09:30:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:26.081 09:30:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:26.082 09:30:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:26.082 09:30:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:26.082 09:30:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:26.082 { 00:13:26.082 "subsystems": [ 00:13:26.082 { 00:13:26.082 "subsystem": "bdev", 00:13:26.082 "config": [ 00:13:26.082 { 00:13:26.082 "params": { 00:13:26.082 "io_mechanism": "io_uring", 00:13:26.082 "conserve_cpu": true, 00:13:26.082 "filename": "/dev/nvme0n1", 00:13:26.082 "name": "xnvme_bdev" 00:13:26.082 }, 00:13:26.082 "method": "bdev_xnvme_create" 00:13:26.082 }, 00:13:26.082 { 00:13:26.082 "method": "bdev_wait_for_examine" 00:13:26.082 } 00:13:26.082 ] 00:13:26.082 } 00:13:26.082 ] 00:13:26.082 } 00:13:26.082 [2024-11-29 09:30:53.629050] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:13:26.082 [2024-11-29 09:30:53.629187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83668 ] 00:13:26.082 [2024-11-29 09:30:53.765414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:26.082 [2024-11-29 09:30:53.795972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.344 [2024-11-29 09:30:53.824934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.344 Running I/O for 5 seconds... 00:13:28.236 9912.00 IOPS, 38.72 MiB/s [2024-11-29T09:30:57.349Z] 9945.00 IOPS, 38.85 MiB/s [2024-11-29T09:30:58.291Z] 10113.67 IOPS, 39.51 MiB/s [2024-11-29T09:30:59.236Z] 10256.00 IOPS, 40.06 MiB/s [2024-11-29T09:30:59.236Z] 10278.80 IOPS, 40.15 MiB/s 00:13:31.510 Latency(us) 00:13:31.510 [2024-11-29T09:30:59.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.510 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:31.510 xnvme_bdev : 5.01 10272.55 40.13 0.00 0.00 6220.09 67.74 26214.40 00:13:31.510 [2024-11-29T09:30:59.236Z] =================================================================================================================== 00:13:31.510 [2024-11-29T09:30:59.236Z] Total : 10272.55 40.13 0.00 0.00 6220.09 67.74 26214.40 00:13:31.510 ************************************ 00:13:31.510 END TEST xnvme_bdevperf 00:13:31.510 ************************************ 00:13:31.510 00:13:31.510 real 0m11.130s 00:13:31.510 user 0m7.372s 00:13:31.510 sys 0m2.831s 00:13:31.510 09:30:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.510 09:30:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.510 09:30:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:31.510 09:30:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.510 09:30:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.510 09:30:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.510 ************************************ 00:13:31.510 START TEST xnvme_fio_plugin 00:13:31.510 ************************************ 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.510 09:30:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:31.510 09:30:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:31.772 { 00:13:31.772 "subsystems": [ 00:13:31.772 { 00:13:31.772 "subsystem": "bdev", 00:13:31.772 "config": [ 00:13:31.772 { 00:13:31.772 "params": { 00:13:31.772 "io_mechanism": "io_uring", 00:13:31.772 "conserve_cpu": true, 00:13:31.772 "filename": "/dev/nvme0n1", 00:13:31.772 "name": "xnvme_bdev" 00:13:31.772 }, 00:13:31.772 "method": "bdev_xnvme_create" 00:13:31.772 }, 00:13:31.772 { 00:13:31.772 "method": "bdev_wait_for_examine" 00:13:31.772 } 00:13:31.772 ] 00:13:31.772 } 00:13:31.772 ] 00:13:31.772 } 00:13:31.772 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:31.772 fio-3.35 00:13:31.772 Starting 1 thread 00:13:38.366 00:13:38.366 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83773: Fri Nov 29 09:31:04 2024 00:13:38.366 read: IOPS=36.0k, BW=141MiB/s (148MB/s)(704MiB/5002msec) 00:13:38.366 slat (usec): min=2, max=103, avg= 3.94, stdev= 2.17 00:13:38.366 clat (usec): min=987, max=6891, avg=1614.69, stdev=250.07 00:13:38.366 lat (usec): min=990, max=6895, avg=1618.63, stdev=250.59 00:13:38.366 clat percentiles (usec): 00:13:38.366 | 1.00th=[ 1172], 5.00th=[ 1287], 10.00th=[ 1352], 20.00th=[ 1418], 00:13:38.366 | 30.00th=[ 1483], 40.00th=[ 1532], 50.00th=[ 1582], 60.00th=[ 1631], 00:13:38.366 | 70.00th=[ 1696], 80.00th=[ 
1795], 90.00th=[ 1942], 95.00th=[ 2057], 00:13:38.366 | 99.00th=[ 2343], 99.50th=[ 2474], 99.90th=[ 3130], 99.95th=[ 3523], 00:13:38.366 | 99.99th=[ 4080] 00:13:38.366 bw ( KiB/s): min=141312, max=152576, per=100.00%, avg=144862.22, stdev=4320.60, samples=9 00:13:38.366 iops : min=35328, max=38144, avg=36215.56, stdev=1080.15, samples=9 00:13:38.366 lat (usec) : 1000=0.01% 00:13:38.366 lat (msec) : 2=92.87%, 4=7.12%, 10=0.01% 00:13:38.366 cpu : usr=41.49%, sys=54.03%, ctx=14, majf=0, minf=771 00:13:38.366 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:38.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.366 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:38.366 issued rwts: total=180297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.366 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:38.366 00:13:38.366 Run status group 0 (all jobs): 00:13:38.366 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=704MiB (738MB), run=5002-5002msec 00:13:38.366 ----------------------------------------------------- 00:13:38.366 Suppressions used: 00:13:38.366 count bytes template 00:13:38.366 1 11 /usr/src/fio/parse.c 00:13:38.366 1 8 libtcmalloc_minimal.so 00:13:38.366 1 904 libcrypto.so 00:13:38.366 ----------------------------------------------------- 00:13:38.366 00:13:38.366 09:31:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.366 09:31:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk 
'{print $3}' 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.367 09:31:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.367 { 00:13:38.367 "subsystems": [ 00:13:38.367 { 00:13:38.367 "subsystem": "bdev", 00:13:38.367 "config": [ 00:13:38.367 { 00:13:38.367 "params": { 00:13:38.367 "io_mechanism": "io_uring", 00:13:38.367 "conserve_cpu": true, 00:13:38.367 "filename": "/dev/nvme0n1", 00:13:38.367 "name": "xnvme_bdev" 00:13:38.367 }, 00:13:38.367 "method": "bdev_xnvme_create" 00:13:38.367 }, 00:13:38.367 { 00:13:38.367 "method": "bdev_wait_for_examine" 00:13:38.367 } 00:13:38.367 ] 00:13:38.367 } 00:13:38.367 ] 00:13:38.367 } 00:13:38.367 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:38.367 fio-3.35 00:13:38.367 Starting 1 thread 00:13:43.657 00:13:43.657 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83859: Fri Nov 29 09:31:10 2024 00:13:43.657 write: IOPS=34.7k, BW=136MiB/s (142MB/s)(680MiB/5010msec); 0 zone resets 00:13:43.657 slat (usec): min=2, max=548, avg= 4.11, stdev= 2.72 00:13:43.657 clat (usec): min=74, max=15441, avg=1677.41, stdev=589.28 00:13:43.657 lat (usec): min=77, max=15445, avg=1681.52, stdev=589.51 00:13:43.657 clat percentiles (usec): 00:13:43.657 | 1.00th=[ 1090], 5.00th=[ 1287], 10.00th=[ 1352], 20.00th=[ 1434], 00:13:43.657 | 30.00th=[ 1483], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:13:43.657 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2180], 00:13:43.657 | 99.00th=[ 2606], 99.50th=[ 3359], 99.90th=[10421], 99.95th=[11469], 00:13:43.657 | 99.99th=[13960] 00:13:43.657 bw ( KiB/s): min=121136, max=147840, per=100.00%, avg=139169.50, stdev=7749.68, samples=10 00:13:43.657 iops : min=30284, max=36960, avg=34792.30, stdev=1937.41, samples=10 00:13:43.657 lat (usec) : 100=0.01%, 250=0.02%, 500=0.05%, 750=0.18%, 1000=0.54% 00:13:43.657 lat (msec) : 2=89.08%, 4=9.67%, 10=0.31%, 20=0.14% 00:13:43.657 cpu : usr=50.01%, sys=45.34%, ctx=23, majf=0, minf=772 00:13:43.657 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.7%, 32=50.7%, >=64=1.7% 00:13:43.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.657 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:43.657 issued rwts: total=0,174060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.657 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:43.657 00:13:43.657 Run status group 0 (all jobs): 00:13:43.657 WRITE: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=680MiB (713MB), run=5010-5010msec 00:13:43.657 ----------------------------------------------------- 00:13:43.657 Suppressions used: 00:13:43.657 count bytes template 00:13:43.657 1 11 /usr/src/fio/parse.c 00:13:43.657 1 8 libtcmalloc_minimal.so 00:13:43.657 1 904 libcrypto.so 00:13:43.657 
----------------------------------------------------- 00:13:43.657 00:13:43.657 ************************************ 00:13:43.657 END TEST xnvme_fio_plugin 00:13:43.657 ************************************ 00:13:43.657 00:13:43.657 real 0m12.091s 00:13:43.657 user 0m5.769s 00:13:43.657 sys 0m5.554s 00:13:43.657 09:31:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.657 09:31:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:43.657 09:31:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:43.657 09:31:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.657 09:31:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.657 09:31:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.657 ************************************ 00:13:43.657 START TEST xnvme_rpc 00:13:43.657 ************************************ 00:13:43.657 09:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83940 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83940 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83940 ']' 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.658 09:31:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.919 [2024-11-29 09:31:11.439607] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:13:43.919 [2024-11-29 09:31:11.439962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83940 ] 00:13:43.919 [2024-11-29 09:31:11.577464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:43.919 [2024-11-29 09:31:11.607867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.919 [2024-11-29 09:31:11.636255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.860 xnvme_bdev 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:44.860 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- 
# rpc_xnvme conserve_cpu 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83940 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83940 ']' 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83940 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83940 00:13:44.861 killing process with pid 83940 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83940' 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83940 00:13:44.861 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83940 00:13:45.122 ************************************ 00:13:45.122 00:13:45.122 real 0m1.458s 00:13:45.122 user 0m1.517s 00:13:45.122 sys 0m0.430s 00:13:45.122 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.122 09:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.122 END TEST xnvme_rpc 00:13:45.122 ************************************ 00:13:45.384 09:31:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:45.384 09:31:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.384 09:31:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.384 09:31:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.384 ************************************ 00:13:45.384 START TEST xnvme_bdevperf 00:13:45.384 ************************************ 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:45.384 09:31:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.384 { 00:13:45.384 "subsystems": [ 00:13:45.384 { 00:13:45.384 "subsystem": "bdev", 00:13:45.384 "config": [ 00:13:45.384 { 00:13:45.384 "params": { 00:13:45.384 "io_mechanism": "io_uring_cmd", 00:13:45.384 "conserve_cpu": false, 00:13:45.384 "filename": "/dev/ng0n1", 00:13:45.384 "name": "xnvme_bdev" 00:13:45.384 }, 00:13:45.384 "method": "bdev_xnvme_create" 00:13:45.384 }, 00:13:45.384 { 00:13:45.384 "method": "bdev_wait_for_examine" 00:13:45.384 } 00:13:45.384 ] 00:13:45.384 } 00:13:45.384 ] 00:13:45.384 } 00:13:45.384 [2024-11-29 09:31:12.955775] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:45.384 [2024-11-29 09:31:12.956133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83992 ] 00:13:45.384 [2024-11-29 09:31:13.092021] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:45.645 [2024-11-29 09:31:13.119796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.645 [2024-11-29 09:31:13.148067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.645 Running I/O for 5 seconds... 
00:13:47.978 34816.00 IOPS, 136.00 MiB/s [2024-11-29T09:31:16.278Z] 34816.00 IOPS, 136.00 MiB/s [2024-11-29T09:31:17.667Z] 34944.00 IOPS, 136.50 MiB/s [2024-11-29T09:31:18.612Z] 34990.00 IOPS, 136.68 MiB/s 00:13:50.886 Latency(us) 00:13:50.886 [2024-11-29T09:31:18.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.886 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:50.886 xnvme_bdev : 5.00 34690.80 135.51 0.00 0.00 1840.72 370.22 11947.72 00:13:50.886 [2024-11-29T09:31:18.612Z] =================================================================================================================== 00:13:50.886 [2024-11-29T09:31:18.612Z] Total : 34690.80 135.51 0.00 0.00 1840.72 370.22 11947.72 00:13:50.886 09:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.886 09:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:50.886 09:31:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:50.886 09:31:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:50.886 09:31:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.886 { 00:13:50.886 "subsystems": [ 00:13:50.886 { 00:13:50.886 "subsystem": "bdev", 00:13:50.886 "config": [ 00:13:50.886 { 00:13:50.886 "params": { 00:13:50.886 "io_mechanism": "io_uring_cmd", 00:13:50.886 "conserve_cpu": false, 00:13:50.886 "filename": "/dev/ng0n1", 00:13:50.886 "name": "xnvme_bdev" 00:13:50.886 }, 00:13:50.886 "method": "bdev_xnvme_create" 00:13:50.886 }, 00:13:50.886 { 00:13:50.886 "method": "bdev_wait_for_examine" 00:13:50.886 } 00:13:50.886 ] 00:13:50.886 } 00:13:50.886 ] 00:13:50.886 } 00:13:50.886 [2024-11-29 09:31:18.509459] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:50.886 [2024-11-29 09:31:18.509812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84061 ] 00:13:51.147 [2024-11-29 09:31:18.645757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:51.147 [2024-11-29 09:31:18.675265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.147 [2024-11-29 09:31:18.704188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.147 Running I/O for 5 seconds... 
00:13:53.476 15634.00 IOPS, 61.07 MiB/s [2024-11-29T09:31:22.147Z] 15055.50 IOPS, 58.81 MiB/s [2024-11-29T09:31:23.090Z] 14802.67 IOPS, 57.82 MiB/s [2024-11-29T09:31:24.034Z] 14840.00 IOPS, 57.97 MiB/s [2024-11-29T09:31:24.034Z] 14990.00 IOPS, 58.55 MiB/s 00:13:56.308 Latency(us) 00:13:56.308 [2024-11-29T09:31:24.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.308 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:56.308 xnvme_bdev : 5.01 14979.64 58.51 0.00 0.00 4264.61 64.98 27222.65 00:13:56.308 [2024-11-29T09:31:24.034Z] =================================================================================================================== 00:13:56.308 [2024-11-29T09:31:24.034Z] Total : 14979.64 58.51 0.00 0.00 4264.61 64.98 27222.65 00:13:56.308 09:31:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.308 09:31:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:13:56.308 09:31:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:56.308 09:31:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:56.308 09:31:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.569 { 00:13:56.569 "subsystems": [ 00:13:56.569 { 00:13:56.569 "subsystem": "bdev", 00:13:56.569 "config": [ 00:13:56.569 { 00:13:56.569 "params": { 00:13:56.569 "io_mechanism": "io_uring_cmd", 00:13:56.569 "conserve_cpu": false, 00:13:56.569 "filename": "/dev/ng0n1", 00:13:56.569 "name": "xnvme_bdev" 00:13:56.569 }, 00:13:56.569 "method": "bdev_xnvme_create" 00:13:56.569 }, 00:13:56.569 { 00:13:56.569 "method": "bdev_wait_for_examine" 00:13:56.569 } 00:13:56.569 ] 00:13:56.569 } 00:13:56.569 ] 00:13:56.569 } 00:13:56.569 [2024-11-29 09:31:24.077879] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:13:56.569 [2024-11-29 09:31:24.078007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84124 ] 00:13:56.569 [2024-11-29 09:31:24.215095] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:56.569 [2024-11-29 09:31:24.242233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.569 [2024-11-29 09:31:24.272388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.829 Running I/O for 5 seconds... 
00:13:58.715 72704.00 IOPS, 284.00 MiB/s [2024-11-29T09:31:27.386Z] 72160.00 IOPS, 281.88 MiB/s [2024-11-29T09:31:28.813Z] 74112.00 IOPS, 289.50 MiB/s [2024-11-29T09:31:29.383Z] 76256.00 IOPS, 297.88 MiB/s 00:14:01.657 Latency(us) 00:14:01.657 [2024-11-29T09:31:29.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.657 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:01.657 xnvme_bdev : 5.00 79428.63 310.27 0.00 0.00 802.42 482.07 2772.68 00:14:01.657 [2024-11-29T09:31:29.383Z] =================================================================================================================== 00:14:01.657 [2024-11-29T09:31:29.383Z] Total : 79428.63 310.27 0.00 0.00 802.42 482.07 2772.68 00:14:01.916 09:31:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:01.916 09:31:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:01.916 09:31:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:01.916 09:31:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.916 09:31:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.916 { 00:14:01.916 "subsystems": [ 00:14:01.916 { 00:14:01.916 "subsystem": "bdev", 00:14:01.916 "config": [ 00:14:01.916 { 00:14:01.916 "params": { 00:14:01.916 "io_mechanism": "io_uring_cmd", 00:14:01.916 "conserve_cpu": false, 00:14:01.916 "filename": "/dev/ng0n1", 00:14:01.916 "name": "xnvme_bdev" 00:14:01.916 }, 00:14:01.916 "method": "bdev_xnvme_create" 00:14:01.916 }, 00:14:01.916 { 00:14:01.916 "method": "bdev_wait_for_examine" 00:14:01.916 } 00:14:01.916 ] 00:14:01.916 } 00:14:01.916 ] 00:14:01.916 } 00:14:01.916 [2024-11-29 09:31:29.569644] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:01.916 [2024-11-29 09:31:29.569753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84194 ] 00:14:02.175 [2024-11-29 09:31:29.704366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:02.175 [2024-11-29 09:31:29.730614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.175 [2024-11-29 09:31:29.759663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.175 Running I/O for 5 seconds... 
00:14:04.481 46405.00 IOPS, 181.27 MiB/s [2024-11-29T09:31:33.147Z] 46937.00 IOPS, 183.35 MiB/s [2024-11-29T09:31:34.088Z] 44187.00 IOPS, 172.61 MiB/s [2024-11-29T09:31:35.030Z] 42384.75 IOPS, 165.57 MiB/s [2024-11-29T09:31:35.030Z] 41410.00 IOPS, 161.76 MiB/s 00:14:07.304 Latency(us) 00:14:07.304 [2024-11-29T09:31:35.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.304 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:07.304 xnvme_bdev : 5.00 41393.20 161.69 0.00 0.00 1542.05 158.33 20164.92 00:14:07.304 [2024-11-29T09:31:35.030Z] =================================================================================================================== 00:14:07.304 [2024-11-29T09:31:35.030Z] Total : 41393.20 161.69 0.00 0.00 1542.05 158.33 20164.92 00:14:07.564 ************************************ 00:14:07.564 END TEST xnvme_bdevperf 00:14:07.564 ************************************ 00:14:07.564 00:14:07.564 real 0m22.256s 00:14:07.564 user 0m10.283s 00:14:07.564 sys 0m11.489s 00:14:07.564 09:31:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.564 09:31:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.564 09:31:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:07.564 09:31:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.564 09:31:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.564 09:31:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.564 ************************************ 00:14:07.564 START TEST xnvme_fio_plugin 00:14:07.564 ************************************ 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:07.564 09:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.564 { 00:14:07.564 "subsystems": [ 00:14:07.564 { 00:14:07.564 "subsystem": "bdev", 00:14:07.564 "config": [ 00:14:07.564 { 00:14:07.564 "params": { 00:14:07.564 "io_mechanism": "io_uring_cmd", 00:14:07.564 "conserve_cpu": false, 00:14:07.564 "filename": "/dev/ng0n1", 00:14:07.564 "name": "xnvme_bdev" 00:14:07.564 }, 00:14:07.564 "method": "bdev_xnvme_create" 00:14:07.564 }, 00:14:07.564 { 00:14:07.564 "method": "bdev_wait_for_examine" 00:14:07.564 } 00:14:07.564 ] 00:14:07.564 } 00:14:07.564 ] 00:14:07.564 } 00:14:07.824 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:07.824 fio-3.35 00:14:07.824 Starting 1 thread 00:14:14.412 00:14:14.412 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84301: Fri Nov 29 09:31:40 2024 00:14:14.412 read: IOPS=41.9k, BW=164MiB/s (172MB/s)(818MiB/5001msec) 00:14:14.412 slat (usec): min=2, max=251, avg= 3.21, stdev= 1.98 00:14:14.412 clat (usec): min=870, max=3824, avg=1401.38, stdev=277.77 00:14:14.412 lat (usec): min=873, max=3863, avg=1404.59, stdev=278.10 00:14:14.412 clat percentiles (usec): 00:14:14.413 | 1.00th=[ 996], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1172], 00:14:14.413 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1401], 00:14:14.413 | 70.00th=[ 1500], 80.00th=[ 1614], 90.00th=[ 1778], 95.00th=[ 1926], 00:14:14.413 | 99.00th=[ 2245], 99.50th=[ 2376], 99.90th=[ 2802], 99.95th=[ 3261], 00:14:14.413 | 99.99th=[ 3621] 00:14:14.413 bw ( KiB/s): min=138752, max=187904, per=100.00%, avg=168508.11, stdev=16012.46, samples=9 00:14:14.413 iops : min=34688, max=46976, avg=42127.00, stdev=4003.09, samples=9 00:14:14.413 lat (usec) : 1000=1.02% 00:14:14.413 lat (msec) : 2=95.52%, 4=3.47% 00:14:14.413 cpu : usr=41.58%, sys=57.02%, ctx=32, majf=0, minf=771 00:14:14.413 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:14.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.413 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:14:14.413 issued rwts: total=209503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:14.413 00:14:14.413 Run status group 0 (all jobs): 00:14:14.413 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=818MiB (858MB), run=5001-5001msec 00:14:14.413 ----------------------------------------------------- 00:14:14.413 Suppressions used: 00:14:14.413 count bytes template 00:14:14.413 1 11 /usr/src/fio/parse.c 00:14:14.413 1 8 libtcmalloc_minimal.so 00:14:14.413 1 904 libcrypto.so 00:14:14.413 ----------------------------------------------------- 00:14:14.413 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:14.413 09:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:14.413 { 00:14:14.413 "subsystems": [ 00:14:14.413 { 00:14:14.413 "subsystem": "bdev", 00:14:14.413 "config": [ 00:14:14.413 { 00:14:14.413 "params": { 00:14:14.413 "io_mechanism": "io_uring_cmd", 00:14:14.413 "conserve_cpu": false, 00:14:14.413 "filename": "/dev/ng0n1", 00:14:14.413 "name": "xnvme_bdev" 00:14:14.413 }, 00:14:14.413 "method": "bdev_xnvme_create" 00:14:14.413 }, 00:14:14.413 { 00:14:14.413 "method": "bdev_wait_for_examine" 00:14:14.413 } 00:14:14.413 ] 00:14:14.413 } 00:14:14.413 ] 00:14:14.413 } 00:14:14.413 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:14.413 fio-3.35 00:14:14.413 Starting 1 thread 00:14:19.705 00:14:19.705 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84386: Fri Nov 29 09:31:47 2024 00:14:19.705 write: IOPS=40.1k, BW=156MiB/s (164MB/s)(783MiB/5003msec); 0 zone resets 00:14:19.705 slat (nsec): min=2919, max=82237, avg=3847.88, stdev=1941.15 00:14:19.705 clat (usec): min=150, max=7366, avg=1451.90, stdev=315.69 00:14:19.705 lat (usec): min=153, max=7382, avg=1455.75, stdev=316.11 00:14:19.705 clat percentiles (usec): 00:14:19.705 | 1.00th=[ 832], 5.00th=[ 1045], 10.00th=[ 1106], 20.00th=[ 1205], 00:14:19.705 | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1418], 60.00th=[ 1500], 00:14:19.705 | 70.00th=[ 1582], 80.00th=[ 1696], 90.00th=[ 1844], 95.00th=[ 1975], 00:14:19.705 | 99.00th=[ 2311], 99.50th=[ 2540], 99.90th=[ 3392], 99.95th=[ 3621], 00:14:19.705 | 99.99th=[ 5407] 00:14:19.705 bw ( KiB/s): min=140448, max=181320, per=100.00%, avg=162505.78, stdev=15502.32, samples=9 00:14:19.705 iops : min=35112, max=45330, avg=40626.44, stdev=3875.58, samples=9 00:14:19.705 lat (usec) : 250=0.01%, 500=0.21%, 750=0.40%, 1000=2.77% 00:14:19.705 lat (msec) : 2=92.16%, 4=4.43%, 10=0.03% 00:14:19.705 cpu : usr=38.40%, sys=60.40%, ctx=10, majf=0, minf=772 00:14:19.705 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.4%, 16=23.2%, 32=53.9%, >=64=1.8% 00:14:19.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.705 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:14:19.705 issued rwts: total=0,200414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.705 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:19.705 00:14:19.705 Run status group 0 (all jobs): 00:14:19.705 WRITE: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=783MiB (821MB), run=5003-5003msec 00:14:19.966 ----------------------------------------------------- 00:14:19.966 Suppressions used: 00:14:19.966 count bytes template 00:14:19.966 1 11 /usr/src/fio/parse.c 00:14:19.966 1 8 libtcmalloc_minimal.so 00:14:19.966 1 904 libcrypto.so 00:14:19.966 ----------------------------------------------------- 00:14:19.966 00:14:19.966 00:14:19.966 real 0m12.315s 00:14:19.966 user 0m5.312s 00:14:19.966 sys 0m6.545s 00:14:19.966 ************************************ 00:14:19.966 END TEST xnvme_fio_plugin 00:14:19.966 ************************************ 00:14:19.966 09:31:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.966 09:31:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:19.966 09:31:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:19.966 09:31:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:19.966 09:31:47 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:19.966 09:31:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:19.966 09:31:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:19.966 09:31:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.966 09:31:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.966 ************************************ 00:14:19.966 START TEST xnvme_rpc 00:14:19.966 ************************************ 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:19.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84460 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84460 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84460 ']' 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.966 09:31:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:20.226 [2024-11-29 09:31:47.697024] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:20.226 [2024-11-29 09:31:47.697526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84460 ] 00:14:20.226 [2024-11-29 09:31:47.838185] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:20.226 [2024-11-29 09:31:47.868986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.226 [2024-11-29 09:31:47.910555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.170 xnvme_bdev 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84460 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84460 ']' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84460 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84460 00:14:21.170 killing process with pid 84460 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84460' 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84460 00:14:21.170 09:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84460 00:14:21.744 ************************************ 00:14:21.744 END TEST xnvme_rpc 00:14:21.744 ************************************ 00:14:21.744 00:14:21.744 real 0m1.619s 00:14:21.744 user 0m1.586s 00:14:21.744 sys 0m0.534s 00:14:21.744 09:31:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.744 09:31:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.744 09:31:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:21.744 09:31:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.744 09:31:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.744 09:31:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.744 ************************************ 00:14:21.744 START TEST xnvme_bdevperf 00:14:21.744 ************************************ 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:21.744 09:31:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.744 { 00:14:21.744 "subsystems": [ 00:14:21.744 { 00:14:21.744 "subsystem": "bdev", 00:14:21.744 "config": [ 
00:14:21.744 { 00:14:21.744 "params": { 00:14:21.744 "io_mechanism": "io_uring_cmd", 00:14:21.744 "conserve_cpu": true, 00:14:21.744 "filename": "/dev/ng0n1", 00:14:21.744 "name": "xnvme_bdev" 00:14:21.744 }, 00:14:21.744 "method": "bdev_xnvme_create" 00:14:21.744 }, 00:14:21.744 { 00:14:21.744 "method": "bdev_wait_for_examine" 00:14:21.744 } 00:14:21.744 ] 00:14:21.744 } 00:14:21.744 ] 00:14:21.744 } 00:14:21.744 [2024-11-29 09:31:49.365719] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:21.744 [2024-11-29 09:31:49.365855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84518 ] 00:14:22.006 [2024-11-29 09:31:49.503051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:22.006 [2024-11-29 09:31:49.533249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.006 [2024-11-29 09:31:49.573532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.006 Running I/O for 5 seconds... 00:14:24.338 40544.00 IOPS, 158.38 MiB/s [2024-11-29T09:31:53.008Z] 39984.00 IOPS, 156.19 MiB/s [2024-11-29T09:31:53.952Z] 39948.33 IOPS, 156.05 MiB/s [2024-11-29T09:31:54.896Z] 40462.75 IOPS, 158.06 MiB/s [2024-11-29T09:31:54.896Z] 40271.60 IOPS, 157.31 MiB/s 00:14:27.170 Latency(us) 00:14:27.170 [2024-11-29T09:31:54.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.170 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:27.170 xnvme_bdev : 5.00 40268.39 157.30 0.00 0.00 1586.07 557.69 6956.90 00:14:27.170 [2024-11-29T09:31:54.896Z] =================================================================================================================== 00:14:27.170 [2024-11-29T09:31:54.896Z] Total : 40268.39 157.30 0.00 0.00 1586.07 557.69 6956.90 00:14:27.431 09:31:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.431 09:31:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:27.431 09:31:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.431 09:31:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.431 09:31:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.431 { 00:14:27.431 "subsystems": [ 00:14:27.431 { 00:14:27.431 "subsystem": "bdev", 00:14:27.431 "config": [ 00:14:27.431 { 00:14:27.431 "params": { 00:14:27.431 "io_mechanism": "io_uring_cmd", 00:14:27.431 "conserve_cpu": true, 00:14:27.431 "filename": "/dev/ng0n1", 00:14:27.431 "name": "xnvme_bdev" 00:14:27.431 }, 00:14:27.431 "method": "bdev_xnvme_create" 00:14:27.431 }, 00:14:27.431 { 00:14:27.431 "method": "bdev_wait_for_examine" 00:14:27.431 } 00:14:27.431 ] 00:14:27.431 } 00:14:27.431 ] 00:14:27.431 } 00:14:27.431 [2024-11-29 09:31:55.063711] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:14:27.431 [2024-11-29 09:31:55.063847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84581 ] 00:14:27.691 [2024-11-29 09:31:55.200748] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:27.691 [2024-11-29 09:31:55.229452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.691 [2024-11-29 09:31:55.255015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.691 Running I/O for 5 seconds... 00:14:30.018 45341.00 IOPS, 177.11 MiB/s [2024-11-29T09:31:58.688Z] 43976.00 IOPS, 171.78 MiB/s [2024-11-29T09:31:59.632Z] 43752.33 IOPS, 170.91 MiB/s [2024-11-29T09:32:00.576Z] 43596.25 IOPS, 170.30 MiB/s 00:14:32.850 Latency(us) 00:14:32.850 [2024-11-29T09:32:00.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.850 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:32.850 xnvme_bdev : 5.00 43366.73 169.40 0.00 0.00 1471.73 302.47 7965.14 00:14:32.850 [2024-11-29T09:32:00.576Z] =================================================================================================================== 00:14:32.850 [2024-11-29T09:32:00.576Z] Total : 43366.73 169.40 0.00 0.00 1471.73 302.47 7965.14 00:14:33.111 09:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.111 09:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:33.111 09:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:33.111 09:32:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:33.111 09:32:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.111 { 00:14:33.111 "subsystems": [ 00:14:33.111 { 00:14:33.111 "subsystem": "bdev", 00:14:33.111 "config": [ 00:14:33.111 { 00:14:33.111 "params": { 00:14:33.111 "io_mechanism": "io_uring_cmd", 00:14:33.111 "conserve_cpu": true, 00:14:33.111 "filename": "/dev/ng0n1", 00:14:33.111 "name": "xnvme_bdev" 00:14:33.111 }, 00:14:33.111 "method": "bdev_xnvme_create" 00:14:33.111 }, 00:14:33.111 { 00:14:33.111 "method": "bdev_wait_for_examine" 00:14:33.111 } 00:14:33.111 ] 00:14:33.111 } 00:14:33.111 ] 00:14:33.111 } 00:14:33.111 [2024-11-29 09:32:00.698078] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:33.111 [2024-11-29 09:32:00.698331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84654 ] 00:14:33.111 [2024-11-29 09:32:00.831325] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:33.372 [2024-11-29 09:32:00.860873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.372 [2024-11-29 09:32:00.889260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.372 Running I/O for 5 seconds... 
00:14:35.701 72256.00 IOPS, 282.25 MiB/s [2024-11-29T09:32:04.371Z] 74944.00 IOPS, 292.75 MiB/s [2024-11-29T09:32:05.315Z] 79402.67 IOPS, 310.17 MiB/s [2024-11-29T09:32:06.423Z] 79616.00 IOPS, 311.00 MiB/s [2024-11-29T09:32:06.423Z] 79347.20 IOPS, 309.95 MiB/s 00:14:38.697 Latency(us) 00:14:38.697 [2024-11-29T09:32:06.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.697 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:38.697 xnvme_bdev : 5.00 79322.88 309.85 0.00 0.00 803.34 395.42 2797.88 00:14:38.697 [2024-11-29T09:32:06.423Z] =================================================================================================================== 00:14:38.697 [2024-11-29T09:32:06.423Z] Total : 79322.88 309.85 0.00 0.00 803.34 395.42 2797.88 00:14:38.697 09:32:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.697 09:32:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:38.697 09:32:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.697 09:32:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.697 09:32:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.697 { 00:14:38.697 "subsystems": [ 00:14:38.697 { 00:14:38.697 "subsystem": "bdev", 00:14:38.697 "config": [ 00:14:38.697 { 00:14:38.697 "params": { 00:14:38.697 "io_mechanism": "io_uring_cmd", 00:14:38.697 "conserve_cpu": true, 00:14:38.697 "filename": "/dev/ng0n1", 00:14:38.697 "name": "xnvme_bdev" 00:14:38.697 }, 00:14:38.697 "method": "bdev_xnvme_create" 00:14:38.697 }, 00:14:38.697 { 00:14:38.697 "method": "bdev_wait_for_examine" 00:14:38.697 } 00:14:38.697 ] 00:14:38.697 } 00:14:38.697 ] 00:14:38.697 } 00:14:38.697 [2024-11-29 09:32:06.288955] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:38.697 [2024-11-29 09:32:06.289087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84718 ] 00:14:38.957 [2024-11-29 09:32:06.425689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:38.957 [2024-11-29 09:32:06.455998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.957 [2024-11-29 09:32:06.486965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.957 Running I/O for 5 seconds... 
00:14:41.283 41061.00 IOPS, 160.39 MiB/s [2024-11-29T09:32:09.948Z] 40032.00 IOPS, 156.38 MiB/s [2024-11-29T09:32:10.887Z] 39715.33 IOPS, 155.14 MiB/s [2024-11-29T09:32:11.828Z] 40071.25 IOPS, 156.53 MiB/s [2024-11-29T09:32:11.828Z] 40815.00 IOPS, 159.43 MiB/s 00:14:44.102 Latency(us) 00:14:44.102 [2024-11-29T09:32:11.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.102 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:44.102 xnvme_bdev : 5.00 40796.58 159.36 0.00 0.00 1563.39 118.15 25105.33 00:14:44.102 [2024-11-29T09:32:11.828Z] =================================================================================================================== 00:14:44.102 [2024-11-29T09:32:11.828Z] Total : 40796.58 159.36 0.00 0.00 1563.39 118.15 25105.33 00:14:44.363 00:14:44.363 real 0m22.572s 00:14:44.363 user 0m16.068s 00:14:44.363 sys 0m4.186s 00:14:44.363 09:32:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.363 09:32:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.363 ************************************ 00:14:44.363 END TEST xnvme_bdevperf 00:14:44.363 ************************************ 00:14:44.363 09:32:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:44.363 09:32:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.363 09:32:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.363 09:32:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.363 ************************************ 00:14:44.363 START TEST xnvme_fio_plugin 00:14:44.363 ************************************ 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:44.363 09:32:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.363 { 00:14:44.363 "subsystems": [ 00:14:44.363 { 00:14:44.363 "subsystem": "bdev", 00:14:44.363 "config": [ 00:14:44.363 { 00:14:44.363 "params": { 00:14:44.363 "io_mechanism": "io_uring_cmd", 00:14:44.363 "conserve_cpu": true, 00:14:44.363 "filename": "/dev/ng0n1", 00:14:44.363 "name": "xnvme_bdev" 00:14:44.363 }, 00:14:44.363 "method": "bdev_xnvme_create" 00:14:44.363 }, 00:14:44.363 { 00:14:44.363 "method": "bdev_wait_for_examine" 00:14:44.363 } 00:14:44.363 ] 00:14:44.363 } 00:14:44.363 ] 00:14:44.363 } 00:14:44.624 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:44.624 fio-3.35 00:14:44.624 Starting 1 thread 00:14:49.919 00:14:49.919 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84820: Fri Nov 29 09:32:17 2024 00:14:49.919 read: IOPS=35.5k, BW=139MiB/s (145MB/s)(694MiB/5001msec) 00:14:49.919 slat (nsec): min=2873, max=67609, avg=3666.91, stdev=1971.38 00:14:49.919 clat (usec): min=99, max=3256, avg=1652.58, stdev=264.15 00:14:49.919 lat (usec): min=101, max=3265, avg=1656.25, stdev=264.45 00:14:49.919 clat percentiles (usec): 00:14:49.919 | 1.00th=[ 1172], 5.00th=[ 1287], 10.00th=[ 1352], 20.00th=[ 1434], 00:14:49.919 | 30.00th=[ 1500], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:14:49.919 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:14:49.919 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2737], 99.95th=[ 2835], 00:14:49.919 | 99.99th=[ 3097] 00:14:49.919 bw ( KiB/s): min=136704, max=146944, per=100.00%, avg=142417.78, stdev=2934.03, samples=9 00:14:49.919 iops : min=34176, max=36736, avg=35604.44, stdev=733.51, samples=9 00:14:49.919 lat (usec) : 100=0.01%, 250=0.01%, 1000=0.04% 00:14:49.919 lat (msec) : 2=89.50%, 4=10.45% 00:14:49.919 cpu : usr=55.78%, sys=40.94%, ctx=12, majf=0, minf=771 00:14:49.919 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:49.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:49.919 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
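For reference, the randread fio job above can be reproduced outside the test harness with the same spdk_bdev ioengine plugin. The flags below mirror the command line in the trace; the only assumption is that the JSON config shown above (the bdev_xnvme_create subsystem block) has been written to a local file, here hypothetically named xnvme_bdev.json, instead of being passed through /dev/fd/62:

  # Illustrative standalone invocation; xnvme_bdev.json is a local copy of the JSON config above.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev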
32=0.0%, 64=1.5%, >=64=0.0% 00:14:49.919 issued rwts: total=177603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:49.919 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:49.919 00:14:49.919 Run status group 0 (all jobs): 00:14:49.919 READ: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=694MiB (727MB), run=5001-5001msec 00:14:50.492 ----------------------------------------------------- 00:14:50.492 Suppressions used: 00:14:50.492 count bytes template 00:14:50.492 1 11 /usr/src/fio/parse.c 00:14:50.492 1 8 libtcmalloc_minimal.so 00:14:50.492 1 904 libcrypto.so 00:14:50.492 ----------------------------------------------------- 00:14:50.492 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:50.492 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:50.493 09:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.493 { 00:14:50.493 "subsystems": [ 00:14:50.493 { 00:14:50.493 "subsystem": "bdev", 00:14:50.493 "config": [ 00:14:50.493 { 00:14:50.493 "params": { 00:14:50.493 "io_mechanism": "io_uring_cmd", 00:14:50.493 "conserve_cpu": true, 00:14:50.493 "filename": "/dev/ng0n1", 00:14:50.493 "name": "xnvme_bdev" 00:14:50.493 }, 00:14:50.493 "method": "bdev_xnvme_create" 00:14:50.493 }, 00:14:50.493 { 00:14:50.493 "method": "bdev_wait_for_examine" 00:14:50.493 } 00:14:50.493 ] 00:14:50.493 } 00:14:50.493 ] 00:14:50.493 } 00:14:50.755 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:50.755 fio-3.35 00:14:50.755 Starting 1 thread 00:14:56.059 00:14:56.059 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84906: Fri Nov 29 09:32:23 2024 00:14:56.059 write: IOPS=36.8k, BW=144MiB/s (151MB/s)(719MiB/5004msec); 0 zone resets 00:14:56.059 slat (usec): min=2, max=367, avg= 4.18, stdev= 2.62 00:14:56.059 clat (usec): min=814, max=7189, avg=1568.49, stdev=275.89 00:14:56.059 lat (usec): min=818, max=7224, avg=1572.68, stdev=276.48 00:14:56.059 clat percentiles (usec): 00:14:56.059 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1287], 20.00th=[ 1369], 00:14:56.059 | 30.00th=[ 1418], 40.00th=[ 1483], 50.00th=[ 1532], 60.00th=[ 1598], 00:14:56.059 | 70.00th=[ 1663], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2024], 00:14:56.059 | 99.00th=[ 2311], 99.50th=[ 2507], 99.90th=[ 3392], 99.95th=[ 4359], 00:14:56.059 | 99.99th=[ 6980] 00:14:56.059 bw ( KiB/s): min=142760, max=151616, per=99.98%, avg=147205.33, stdev=3555.78, samples=9 00:14:56.059 iops : min=35690, max=37904, avg=36801.33, stdev=888.94, samples=9 00:14:56.059 lat (usec) : 1000=0.10% 00:14:56.059 lat (msec) : 2=94.36%, 4=5.49%, 10=0.05% 00:14:56.059 cpu : usr=48.75%, sys=46.95%, ctx=20, majf=0, minf=772 00:14:56.060 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:14:56.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.060 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:56.060 issued rwts: total=0,184184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.060 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:56.060 00:14:56.060 Run status group 0 (all jobs): 00:14:56.060 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=719MiB (754MB), run=5004-5004msec 00:14:56.631 ----------------------------------------------------- 00:14:56.631 Suppressions used: 00:14:56.631 count bytes template 00:14:56.631 1 11 /usr/src/fio/parse.c 00:14:56.631 1 8 libtcmalloc_minimal.so 00:14:56.631 1 904 libcrypto.so 00:14:56.631 ----------------------------------------------------- 00:14:56.631 00:14:56.631 ************************************ 00:14:56.631 END TEST xnvme_fio_plugin 00:14:56.631 ************************************ 00:14:56.631 00:14:56.631 real 0m12.244s 00:14:56.631 user 0m6.488s 00:14:56.631 sys 0m5.068s 00:14:56.631 09:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.631 09:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:56.631 Process with pid 84460 is not found 00:14:56.631 09:32:24 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84460 00:14:56.631 09:32:24 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84460 ']' 00:14:56.631 09:32:24 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 84460 00:14:56.631 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84460) - No such process 00:14:56.631 09:32:24 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84460 is not found' 00:14:56.631 ************************************ 00:14:56.631 END TEST nvme_xnvme 00:14:56.631 ************************************ 00:14:56.631 00:14:56.631 real 2m59.285s 00:14:56.631 user 1m30.628s 00:14:56.631 sys 1m13.536s 00:14:56.631 09:32:24 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.631 09:32:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.631 09:32:24 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:56.631 09:32:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.631 09:32:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.631 09:32:24 -- common/autotest_common.sh@10 -- # set +x 00:14:56.631 ************************************ 00:14:56.631 START TEST blockdev_xnvme 00:14:56.631 ************************************ 00:14:56.631 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:56.893 * Looking for test storage... 00:14:56.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:56.893 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.893 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.893 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.893 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.893 09:32:24 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:56.893 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.893 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.893 --rc genhtml_branch_coverage=1 00:14:56.893 --rc genhtml_function_coverage=1 00:14:56.893 --rc genhtml_legend=1 00:14:56.893 --rc geninfo_all_blocks=1 00:14:56.893 --rc geninfo_unexecuted_blocks=1 00:14:56.894 00:14:56.894 ' 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.894 --rc genhtml_branch_coverage=1 00:14:56.894 --rc genhtml_function_coverage=1 00:14:56.894 --rc genhtml_legend=1 00:14:56.894 --rc geninfo_all_blocks=1 00:14:56.894 --rc geninfo_unexecuted_blocks=1 00:14:56.894 00:14:56.894 ' 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.894 --rc genhtml_branch_coverage=1 00:14:56.894 --rc genhtml_function_coverage=1 00:14:56.894 --rc genhtml_legend=1 00:14:56.894 --rc geninfo_all_blocks=1 00:14:56.894 --rc geninfo_unexecuted_blocks=1 00:14:56.894 00:14:56.894 ' 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.894 --rc genhtml_branch_coverage=1 00:14:56.894 --rc genhtml_function_coverage=1 00:14:56.894 --rc genhtml_legend=1 00:14:56.894 --rc geninfo_all_blocks=1 00:14:56.894 --rc geninfo_unexecuted_blocks=1 00:14:56.894 00:14:56.894 ' 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:14:56.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85035 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:56.894 09:32:24 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85035 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 85035 ']' 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.894 09:32:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.894 [2024-11-29 09:32:24.544949] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:56.894 [2024-11-29 09:32:24.545099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85035 ] 00:14:57.156 [2024-11-29 09:32:24.683353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
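The setup_xnvme_conf step that follows enumerates every /dev/nvme*n* namespace, skips zoned devices, and builds one bdev_xnvme_create call per namespace (io_uring mechanism, conserve_cpu enabled) to replay into the freshly started target. A condensed sketch of that loop, mirroring the checks visible in the trace below (illustrative only, not the exact harness code):

  io_mechanism=io_uring
  nvmes=()
  for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    # Skip zoned namespaces; a missing /sys attribute is treated as not zoned.
    zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned == none ]] || continue
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
  done
  # Replay the generated create calls against the running target.
  for cmd in "${nvmes[@]}"; do
    ./scripts/rpc.py $cmd
  done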
00:14:57.156 [2024-11-29 09:32:24.710966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.156 [2024-11-29 09:32:24.741146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.099 09:32:25 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.099 09:32:25 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:14:58.099 09:32:25 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:58.099 09:32:25 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:14:58.099 09:32:25 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:58.099 09:32:25 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:58.099 09:32:25 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:58.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:58.934 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:58.934 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:58.935 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:14:58.935 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1c1n1/queue/zoned ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:14:58.935 nvme0n1 00:14:58.935 nvme0n2 00:14:58.935 nvme0n3 00:14:58.935 nvme1n1 00:14:58.935 nvme2n1 00:14:58.935 nvme3n1 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.935 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.935 09:32:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.197 09:32:26 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.197 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:59.197 09:32:26 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.197 09:32:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.197 09:32:26 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.197 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:14:59.197 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:59.197 09:32:26 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.197 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:14:59.197 09:32:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.197 09:32:26 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.197 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:59.197 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # 
jq -r .name 00:14:59.198 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "996a882a-3110-4f9c-8e2f-7a8f43dae92e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "996a882a-3110-4f9c-8e2f-7a8f43dae92e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "67bc65f7-694e-4749-90a1-475316b171ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67bc65f7-694e-4749-90a1-475316b171ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "3854b9eb-35e9-4cae-a72b-34de79463685"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3854b9eb-35e9-4cae-a72b-34de79463685",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "5e7f41ab-7a8f-47d4-9b15-6f0e9acb6958"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5e7f41ab-7a8f-47d4-9b15-6f0e9acb6958",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "879d520c-2f66-412c-b56e-b767fc65cf92"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "879d520c-2f66-412c-b56e-b767fc65cf92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6c44cbab-ff01-4753-af85-c10013a12cc2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6c44cbab-ff01-4753-af85-c10013a12cc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:59.198 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:59.198 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:14:59.198 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:59.198 09:32:26 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 85035 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 85035 ']' 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 85035 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85035 00:14:59.198 killing process with pid 85035 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85035' 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 85035 00:14:59.198 09:32:26 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 85035 00:14:59.459 09:32:27 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:59.459 09:32:27 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:59.459 09:32:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:59.459 09:32:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.459 09:32:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.459 ************************************ 00:14:59.459 START TEST bdev_hello_world 00:14:59.459 ************************************ 00:14:59.459 09:32:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:59.721 [2024-11-29 09:32:27.232799] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:59.721 [2024-11-29 09:32:27.233226] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85308 ] 00:14:59.721 [2024-11-29 09:32:27.369060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:59.721 [2024-11-29 09:32:27.399613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.721 [2024-11-29 09:32:27.428369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.983 [2024-11-29 09:32:27.657160] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:59.983 [2024-11-29 09:32:27.657227] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:59.983 [2024-11-29 09:32:27.657253] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:59.983 [2024-11-29 09:32:27.659506] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:59.983 [2024-11-29 09:32:27.660294] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:59.983 [2024-11-29 09:32:27.660332] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:59.983 [2024-11-29 09:32:27.660661] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:14:59.983 00:14:59.983 [2024-11-29 09:32:27.660688] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:00.244 00:15:00.244 ************************************ 00:15:00.244 END TEST bdev_hello_world 00:15:00.244 ************************************ 00:15:00.244 real 0m0.690s 00:15:00.244 user 0m0.331s 00:15:00.244 sys 0m0.214s 00:15:00.244 09:32:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.244 09:32:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:00.244 09:32:27 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:00.244 09:32:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.245 09:32:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.245 09:32:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.245 ************************************ 00:15:00.245 START TEST bdev_bounds 00:15:00.245 ************************************ 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85328 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:00.245 Process bdevio pid: 85328 00:15:00.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85328' 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85328 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85328 ']' 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:00.245 09:32:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:00.505 [2024-11-29 09:32:27.994349] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:00.505 [2024-11-29 09:32:27.994492] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85328 ] 00:15:00.505 [2024-11-29 09:32:28.132881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
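[Editor's note] While bdevio initializes across these lines, the harness blocks in waitforlisten until the app's RPC socket is up. A condensed, hypothetical version of that polling loop (the real helper in autotest_common.sh has more elaborate readiness probes; testing -S only checks that the socket file exists):

    # Poll until $pid is listening on the RPC UNIX socket, or give up
    # after max_retries attempts.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [[ -S $rpc_addr ]] && return 0           # socket file is present
            sleep 0.1
        done
        return 1
    }
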
00:15:00.505 [2024-11-29 09:32:28.161284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:00.505 [2024-11-29 09:32:28.193347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.505 [2024-11-29 09:32:28.193622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.505 [2024-11-29 09:32:28.193684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.450 09:32:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.450 09:32:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:01.450 09:32:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:01.450 I/O targets: 00:15:01.450 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:01.450 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:01.450 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:01.450 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:01.450 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:01.450 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:01.450 00:15:01.450 00:15:01.450 CUnit - A unit testing framework for C - Version 2.1-3 00:15:01.450 http://cunit.sourceforge.net/ 00:15:01.450 00:15:01.450 00:15:01.450 Suite: bdevio tests on: nvme3n1 00:15:01.450 Test: blockdev write read block ...passed 00:15:01.450 Test: blockdev write zeroes read block ...passed 00:15:01.450 Test: blockdev write zeroes read no split ...passed 00:15:01.450 Test: blockdev write zeroes read split ...passed 00:15:01.450 Test: blockdev write zeroes read split partial ...passed 00:15:01.450 Test: blockdev reset ...passed 00:15:01.450 Test: blockdev write read 8 blocks ...passed 00:15:01.450 Test: blockdev write read size > 128k ...passed 00:15:01.450 Test: blockdev write read invalid size ...passed 00:15:01.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.450 Test: blockdev write read max offset ...passed 00:15:01.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.450 Test: blockdev writev readv 8 blocks ...passed 00:15:01.450 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.450 Test: blockdev writev readv block ...passed 00:15:01.450 Test: blockdev writev readv size > 128k ...passed 00:15:01.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.450 Test: blockdev comparev and writev ...passed 00:15:01.450 Test: blockdev nvme passthru rw ...passed 00:15:01.450 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.450 Test: blockdev nvme admin passthru ...passed 00:15:01.450 Test: blockdev copy ...passed 00:15:01.450 Suite: bdevio tests on: nvme2n1 00:15:01.450 Test: blockdev write read block ...passed 00:15:01.450 Test: blockdev write zeroes read block ...passed 00:15:01.450 Test: blockdev write zeroes read no split ...passed 00:15:01.450 Test: blockdev write zeroes read split ...passed 00:15:01.450 Test: blockdev write zeroes read split partial ...passed 00:15:01.450 Test: blockdev reset ...passed 00:15:01.450 Test: blockdev write read 8 blocks ...passed 00:15:01.450 Test: blockdev write read size > 128k ...passed 00:15:01.450 Test: blockdev write read invalid size ...passed 00:15:01.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.450 Test: blockdev write read offset + nbytes > 
size of blockdev ...passed 00:15:01.450 Test: blockdev write read max offset ...passed 00:15:01.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.450 Test: blockdev writev readv 8 blocks ...passed 00:15:01.450 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.450 Test: blockdev writev readv block ...passed 00:15:01.450 Test: blockdev writev readv size > 128k ...passed 00:15:01.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.450 Test: blockdev comparev and writev ...passed 00:15:01.450 Test: blockdev nvme passthru rw ...passed 00:15:01.450 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.450 Test: blockdev nvme admin passthru ...passed 00:15:01.450 Test: blockdev copy ...passed 00:15:01.450 Suite: bdevio tests on: nvme1n1 00:15:01.450 Test: blockdev write read block ...passed 00:15:01.450 Test: blockdev write zeroes read block ...passed 00:15:01.450 Test: blockdev write zeroes read no split ...passed 00:15:01.450 Test: blockdev write zeroes read split ...passed 00:15:01.450 Test: blockdev write zeroes read split partial ...passed 00:15:01.450 Test: blockdev reset ...passed 00:15:01.450 Test: blockdev write read 8 blocks ...passed 00:15:01.450 Test: blockdev write read size > 128k ...passed 00:15:01.450 Test: blockdev write read invalid size ...passed 00:15:01.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.450 Test: blockdev write read max offset ...passed 00:15:01.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.450 Test: blockdev writev readv 8 blocks ...passed 00:15:01.450 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.450 Test: blockdev writev readv block ...passed 00:15:01.450 Test: blockdev writev readv size > 128k ...passed 00:15:01.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.450 Test: blockdev comparev and writev ...passed 00:15:01.450 Test: blockdev nvme passthru rw ...passed 00:15:01.450 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.450 Test: blockdev nvme admin passthru ...passed 00:15:01.450 Test: blockdev copy ...passed 00:15:01.450 Suite: bdevio tests on: nvme0n3 00:15:01.450 Test: blockdev write read block ...passed 00:15:01.450 Test: blockdev write zeroes read block ...passed 00:15:01.450 Test: blockdev write zeroes read no split ...passed 00:15:01.450 Test: blockdev write zeroes read split ...passed 00:15:01.450 Test: blockdev write zeroes read split partial ...passed 00:15:01.450 Test: blockdev reset ...passed 00:15:01.450 Test: blockdev write read 8 blocks ...passed 00:15:01.450 Test: blockdev write read size > 128k ...passed 00:15:01.450 Test: blockdev write read invalid size ...passed 00:15:01.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.450 Test: blockdev write read max offset ...passed 00:15:01.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.450 Test: blockdev writev readv 8 blocks ...passed 00:15:01.450 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.450 Test: blockdev writev readv block ...passed 00:15:01.450 Test: blockdev writev readv size > 128k ...passed 00:15:01.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.451 Test: blockdev comparev and writev ...passed 
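[Editor's note] All of the CUnit suites streaming through here were triggered by the single tests.py perform_tests call above; once the remaining suites below report, the harness tears the bdevio app down. The whole lifecycle, condensed into a hypothetical standalone sketch (paths from the trace; the killprocess step is reduced to the checks visible in it, which only apply on Linux):

    #!/usr/bin/env bash
    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Start bdevio in wait mode (-w): it sets up the bdevs from the JSON
    # config and idles until an RPC tells it to run the CUnit suites.
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_DIR/test/bdev/bdev.json" &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"   # as sketched earlier

    # Fire the perform_tests RPC; tests.py returns when every suite is done.
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests

    # Teardown mirroring the killprocess trace: refuse to signal a sudo
    # wrapper, then kill the reactor process and reap it.
    if [[ $(ps --no-headers -o comm= "$bdevio_pid") != sudo ]]; then
        echo "killing process with pid $bdevio_pid"
        kill "$bdevio_pid" && wait "$bdevio_pid"
    fi
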
00:15:01.451 Test: blockdev nvme passthru rw ...passed 00:15:01.451 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.451 Test: blockdev nvme admin passthru ...passed 00:15:01.451 Test: blockdev copy ...passed 00:15:01.451 Suite: bdevio tests on: nvme0n2 00:15:01.451 Test: blockdev write read block ...passed 00:15:01.451 Test: blockdev write zeroes read block ...passed 00:15:01.451 Test: blockdev write zeroes read no split ...passed 00:15:01.451 Test: blockdev write zeroes read split ...passed 00:15:01.451 Test: blockdev write zeroes read split partial ...passed 00:15:01.451 Test: blockdev reset ...passed 00:15:01.451 Test: blockdev write read 8 blocks ...passed 00:15:01.451 Test: blockdev write read size > 128k ...passed 00:15:01.451 Test: blockdev write read invalid size ...passed 00:15:01.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.451 Test: blockdev write read max offset ...passed 00:15:01.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.713 Test: blockdev writev readv 8 blocks ...passed 00:15:01.713 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.713 Test: blockdev writev readv block ...passed 00:15:01.713 Test: blockdev writev readv size > 128k ...passed 00:15:01.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.713 Test: blockdev comparev and writev ...passed 00:15:01.713 Test: blockdev nvme passthru rw ...passed 00:15:01.713 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.713 Test: blockdev nvme admin passthru ...passed 00:15:01.713 Test: blockdev copy ...passed 00:15:01.713 Suite: bdevio tests on: nvme0n1 00:15:01.713 Test: blockdev write read block ...passed 00:15:01.713 Test: blockdev write zeroes read block ...passed 00:15:01.713 Test: blockdev write zeroes read no split ...passed 00:15:01.713 Test: blockdev write zeroes read split ...passed 00:15:01.713 Test: blockdev write zeroes read split partial ...passed 00:15:01.713 Test: blockdev reset ...passed 00:15:01.713 Test: blockdev write read 8 blocks ...passed 00:15:01.713 Test: blockdev write read size > 128k ...passed 00:15:01.713 Test: blockdev write read invalid size ...passed 00:15:01.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.713 Test: blockdev write read max offset ...passed 00:15:01.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.713 Test: blockdev writev readv 8 blocks ...passed 00:15:01.713 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.713 Test: blockdev writev readv block ...passed 00:15:01.713 Test: blockdev writev readv size > 128k ...passed 00:15:01.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.713 Test: blockdev comparev and writev ...passed 00:15:01.713 Test: blockdev nvme passthru rw ...passed 00:15:01.713 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.713 Test: blockdev nvme admin passthru ...passed 00:15:01.713 Test: blockdev copy ...passed 00:15:01.713 00:15:01.713 Run Summary: Type Total Ran Passed Failed Inactive 00:15:01.713 suites 6 6 n/a 0 0 00:15:01.713 tests 138 138 138 0 0 00:15:01.713 asserts 780 780 780 0 n/a 00:15:01.713 00:15:01.713 Elapsed time = 0.633 seconds 00:15:01.713 0 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 85328 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85328 ']' 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 85328 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85328 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85328' 00:15:01.713 killing process with pid 85328 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85328 00:15:01.713 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85328 00:15:01.975 09:32:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:01.975 00:15:01.975 real 0m1.574s 00:15:01.975 user 0m3.808s 00:15:01.975 sys 0m0.340s 00:15:01.975 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.975 ************************************ 00:15:01.975 END TEST bdev_bounds 00:15:01.975 ************************************ 00:15:01.975 09:32:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:01.975 09:32:29 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:01.975 09:32:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:01.975 09:32:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.975 09:32:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.975 ************************************ 00:15:01.975 START TEST bdev_nbd 00:15:01.975 ************************************ 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' 
'/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85388 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85388 /var/tmp/spdk-nbd.sock 00:15:01.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85388 ']' 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:01.975 09:32:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:01.975 [2024-11-29 09:32:29.654773] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:01.975 [2024-11-29 09:32:29.654937] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.237 [2024-11-29 09:32:29.795188] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
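[Editor's note] Once bdev_svc finishes initializing below, nbd_function_test pairs each bdev with a kernel NBD node. A trimmed-down sketch of that flow, with names and paths assumed from the trace. Note the trace's first pass calls nbd_start_disk with only the bdev name and lets the target pick the node (the RPC prints the assigned /dev/nbdN); this sketch shows the explicit pairing used in the later data-verify pass:

    #!/usr/bin/env bash
    set -e
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    rpc_server=/var/tmp/spdk-nbd.sock
    rpc="$SPDK_DIR/scripts/rpc.py -s $rpc_server"

    # The test requires the kernel nbd module (loading it needs root).
    [[ -e /sys/module/nbd ]] || modprobe nbd

    bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # Export each bdev on its chosen NBD node, then wait for the kernel to
    # register it (waitfornbd as condensed further down).
    for i in "${!bdev_list[@]}"; do
        $rpc nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done
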
00:15:02.237 [2024-11-29 09:32:29.824736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.237 [2024-11-29 09:32:29.853669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:02.810 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:03.071 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.072 1+0 records in 00:15:03.072 1+0 records out 00:15:03.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775235 s, 5.3 MB/s 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.072 09:32:30 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.072 09:32:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.333 1+0 records in 00:15:03.333 1+0 records out 00:15:03.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108241 s, 3.8 MB/s 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.333 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:03.593 09:32:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.593 1+0 records in 00:15:03.593 1+0 records out 00:15:03.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00187962 s, 2.2 MB/s 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.593 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.853 1+0 records in 00:15:03.853 1+0 records out 00:15:03.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108599 s, 3.8 MB/s 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
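[Editor's note] Every export above goes through the same readiness dance: poll /proc/partitions for the new node, then prove it serves data with a single direct-I/O block read (the size check that continues below). A condensed, hypothetical waitfornbd:

    # Wait for /dev/$1 to show up in /proc/partitions, then verify it with
    # one 4 KiB O_DIRECT read; the trace only requires the read-back to be
    # nonzero bytes, and the real helper retries the dd as well.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1
        local tmp=/tmp/nbdtest.$$ size
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]]
    }
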
00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.853 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.114 1+0 records in 00:15:04.114 1+0 records out 00:15:04.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136016 s, 3.0 MB/s 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:04.114 09:32:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.374 09:32:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.374 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.375 1+0 records in 00:15:04.375 1+0 records out 00:15:04.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117924 s, 3.5 MB/s 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:04.375 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:04.636 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd0", 00:15:04.636 "bdev_name": "nvme0n1" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd1", 00:15:04.636 "bdev_name": "nvme0n2" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd2", 00:15:04.636 "bdev_name": "nvme0n3" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd3", 00:15:04.636 "bdev_name": "nvme1n1" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd4", 00:15:04.636 "bdev_name": "nvme2n1" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd5", 00:15:04.636 "bdev_name": "nvme3n1" 00:15:04.636 } 00:15:04.636 ]' 00:15:04.636 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:04.636 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd0", 00:15:04.636 "bdev_name": "nvme0n1" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd1", 00:15:04.636 "bdev_name": "nvme0n2" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd2", 00:15:04.636 "bdev_name": "nvme0n3" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd3", 00:15:04.636 "bdev_name": "nvme1n1" 00:15:04.636 }, 00:15:04.636 { 00:15:04.636 "nbd_device": "/dev/nbd4", 00:15:04.636 "bdev_name": "nvme2n1" 00:15:04.636 }, 00:15:04.637 { 00:15:04.637 "nbd_device": "/dev/nbd5", 00:15:04.637 "bdev_name": "nvme3n1" 00:15:04.637 } 00:15:04.637 ]' 00:15:04.637 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:04.637 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:04.637 09:32:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:04.637 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:04.637 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.637 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:04.637 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.637 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.897 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.158 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:05.420 09:32:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.681 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.942 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.203 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:06.204 09:32:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:06.464 /dev/nbd0 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.464 1+0 records in 00:15:06.464 1+0 records out 00:15:06.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112938 s, 3.6 MB/s 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:06.464 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.465 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.465 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:06.465 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.465 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:06.465 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:06.726 /dev/nbd1 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.726 1+0 records in 00:15:06.726 1+0 records out 00:15:06.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113386 s, 3.6 MB/s 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # 
return 0 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:06.726 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:06.987 /dev/nbd10 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.987 1+0 records in 00:15:06.987 1+0 records out 00:15:06.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906593 s, 4.5 MB/s 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:06.987 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:07.248 /dev/nbd11 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.248 09:32:34 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.248 1+0 records in 00:15:07.248 1+0 records out 00:15:07.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108832 s, 3.8 MB/s 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.248 09:32:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:07.509 /dev/nbd12 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.509 1+0 records in 00:15:07.509 1+0 records out 00:15:07.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00144123 s, 2.8 MB/s 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.509 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 
00:15:07.771 /dev/nbd13 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.771 1+0 records in 00:15:07.771 1+0 records out 00:15:07.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140118 s, 2.9 MB/s 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.771 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd0", 00:15:08.033 "bdev_name": "nvme0n1" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd1", 00:15:08.033 "bdev_name": "nvme0n2" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd10", 00:15:08.033 "bdev_name": "nvme0n3" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd11", 00:15:08.033 "bdev_name": "nvme1n1" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd12", 00:15:08.033 "bdev_name": "nvme2n1" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd13", 00:15:08.033 "bdev_name": "nvme3n1" 00:15:08.033 } 00:15:08.033 ]' 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd0", 00:15:08.033 "bdev_name": "nvme0n1" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd1", 00:15:08.033 "bdev_name": "nvme0n2" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": 
"/dev/nbd10", 00:15:08.033 "bdev_name": "nvme0n3" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd11", 00:15:08.033 "bdev_name": "nvme1n1" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd12", 00:15:08.033 "bdev_name": "nvme2n1" 00:15:08.033 }, 00:15:08.033 { 00:15:08.033 "nbd_device": "/dev/nbd13", 00:15:08.033 "bdev_name": "nvme3n1" 00:15:08.033 } 00:15:08.033 ]' 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:08.033 /dev/nbd1 00:15:08.033 /dev/nbd10 00:15:08.033 /dev/nbd11 00:15:08.033 /dev/nbd12 00:15:08.033 /dev/nbd13' 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:08.033 /dev/nbd1 00:15:08.033 /dev/nbd10 00:15:08.033 /dev/nbd11 00:15:08.033 /dev/nbd12 00:15:08.033 /dev/nbd13' 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:08.033 256+0 records in 00:15:08.033 256+0 records out 00:15:08.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00702852 s, 149 MB/s 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.033 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:08.295 256+0 records in 00:15:08.295 256+0 records out 00:15:08.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241915 s, 4.3 MB/s 00:15:08.295 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.295 09:32:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:08.556 256+0 records in 00:15:08.556 256+0 records out 00:15:08.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243524 s, 4.3 MB/s 00:15:08.556 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.556 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:15:08.817 256+0 records in 00:15:08.817 256+0 records out 00:15:08.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24281 s, 4.3 MB/s 00:15:08.817 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.817 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:09.077 256+0 records in 00:15:09.077 256+0 records out 00:15:09.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247712 s, 4.2 MB/s 00:15:09.077 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.077 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:09.338 256+0 records in 00:15:09.338 256+0 records out 00:15:09.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.307552 s, 3.4 MB/s 00:15:09.338 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.338 09:32:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:09.600 256+0 records in 00:15:09.600 256+0 records out 00:15:09.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250538 s, 4.2 MB/s 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.600 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:09.861 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.861 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.861 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.861 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.861 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.861 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.862 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:09.862 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.862 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.862 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.151 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.458 09:32:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.720 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:10.980 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:10.980 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:10.980 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:10.980 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.980 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.980 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:10.980 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.981 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.981 09:32:38 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:10.981 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.981 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:11.241 09:32:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:11.502 malloc_lvol_verify 00:15:11.502 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:11.763 fbb76122-c39e-4b24-9fb6-9e37e5f4c3a2 00:15:11.763 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:12.024 c9f3d2a1-8d99-4949-aa28-c18def31a7cf 00:15:12.024 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:12.287 /dev/nbd0 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:12.287 mke2fs 1.47.0 (5-Feb-2023) 00:15:12.287 Discarding device blocks: 0/4096 done 00:15:12.287 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:12.287 00:15:12.287 Allocating group tables: 0/1 done 00:15:12.287 Writing inode tables: 0/1 done 00:15:12.287 Creating journal (1024 blocks): done 00:15:12.287 Writing superblocks and filesystem accounting 
information: 0/1 done 00:15:12.287 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.287 09:32:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85388 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85388 ']' 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85388 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85388 00:15:12.548 killing process with pid 85388 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85388' 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85388 00:15:12.548 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85388 00:15:12.809 09:32:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:12.810 00:15:12.810 real 0m10.715s 00:15:12.810 user 0m14.333s 00:15:12.810 sys 0m4.068s 00:15:12.810 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.810 ************************************ 00:15:12.810 END TEST bdev_nbd 00:15:12.810 ************************************ 00:15:12.810 09:32:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:12.810 09:32:40 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:12.810 09:32:40 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:12.810 09:32:40 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:12.810 09:32:40 blockdev_xnvme -- 
bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:12.810 09:32:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.810 09:32:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.810 09:32:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.810 ************************************ 00:15:12.810 START TEST bdev_fio 00:15:12.810 ************************************ 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:12.810 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:12.810 ************************************ 00:15:12.810 START TEST bdev_fio_rw_verify 00:15:12.810 ************************************ 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:12.810 09:32:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:13.071 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.071 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.071 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.071 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.071 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.071 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.071 fio-3.35 00:15:13.071 Starting 6 threads 00:15:25.308 00:15:25.308 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85791: Fri Nov 29 09:32:51 2024 00:15:25.308 read: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(527MiB/10003msec) 00:15:25.308 slat (usec): min=2, max=4141, avg= 7.02, stdev=16.33 00:15:25.308 clat (usec): min=81, max=10030, avg=1440.99, stdev=775.60 00:15:25.308 lat (usec): min=85, max=10045, avg=1448.01, stdev=776.25 00:15:25.308 clat percentiles (usec): 00:15:25.308 | 50.000th=[ 1352], 99.000th=[ 3851], 99.900th=[ 5080], 99.990th=[ 6980], 00:15:25.308 | 99.999th=[10028] 00:15:25.308 write: IOPS=13.8k, BW=53.9MiB/s (56.6MB/s)(540MiB/10003msec); 0 zone resets 00:15:25.308 slat (usec): min=13, max=7300, avg=42.37, 
stdev=145.28 00:15:25.308 clat (usec): min=83, max=9904, avg=1733.86, stdev=855.85 00:15:25.308 lat (usec): min=98, max=10285, avg=1776.23, stdev=868.17 00:15:25.308 clat percentiles (usec): 00:15:25.308 | 50.000th=[ 1614], 99.000th=[ 4359], 99.900th=[ 6063], 99.990th=[ 8586], 00:15:25.308 | 99.999th=[ 9896] 00:15:25.308 bw ( KiB/s): min=47981, max=71034, per=100.00%, avg=55409.58, stdev=1139.60, samples=114 00:15:25.308 iops : min=11992, max=17758, avg=13851.47, stdev=284.94, samples=114 00:15:25.308 lat (usec) : 100=0.01%, 250=1.25%, 500=5.08%, 750=7.38%, 1000=10.48% 00:15:25.308 lat (msec) : 2=50.40%, 4=24.18%, 10=1.24%, 20=0.01% 00:15:25.308 cpu : usr=44.10%, sys=31.59%, ctx=5516, majf=0, minf=14099 00:15:25.308 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.308 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.308 issued rwts: total=134792,138153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:25.308 00:15:25.308 Run status group 0 (all jobs): 00:15:25.308 READ: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=527MiB (552MB), run=10003-10003msec 00:15:25.308 WRITE: bw=53.9MiB/s (56.6MB/s), 53.9MiB/s-53.9MiB/s (56.6MB/s-56.6MB/s), io=540MiB (566MB), run=10003-10003msec 00:15:25.308 ----------------------------------------------------- 00:15:25.308 Suppressions used: 00:15:25.308 count bytes template 00:15:25.308 6 48 /usr/src/fio/parse.c 00:15:25.308 3275 314400 /usr/src/fio/iolog.c 00:15:25.308 1 8 libtcmalloc_minimal.so 00:15:25.308 1 904 libcrypto.so 00:15:25.308 ----------------------------------------------------- 00:15:25.308 00:15:25.308 00:15:25.308 real 0m11.227s 00:15:25.308 user 0m27.211s 00:15:25.308 sys 0m19.290s 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.308 ************************************ 00:15:25.308 END TEST bdev_fio_rw_verify 00:15:25.308 ************************************ 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 
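Before launching fio, the `bdev_fio_rw_verify` setup above resolved which ASan runtime the SPDK fio plugin links against (third `ldd` column) and preloaded it together with the plugin, since fio only loads the ioengine at runtime and the sanitizer must already be mapped by then. A condensed sketch of that sequence; paths and the sanitizer list come from the trace, and the `--spdk_mem`/`--aux-path` flags of the full command are omitted here for brevity.

```bash
#!/usr/bin/env bash
# Condensed sketch of the sanitizer-preload sequence traced above.

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Third ldd column is the resolved library path, when linked.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# Preload the sanitizer (if found) together with the plugin, then run
# fio with the spdk_bdev ioengine against the generated job file.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
```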
00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:25.308 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:25.309 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "996a882a-3110-4f9c-8e2f-7a8f43dae92e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "996a882a-3110-4f9c-8e2f-7a8f43dae92e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "67bc65f7-694e-4749-90a1-475316b171ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67bc65f7-694e-4749-90a1-475316b171ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "3854b9eb-35e9-4cae-a72b-34de79463685"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3854b9eb-35e9-4cae-a72b-34de79463685",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"5e7f41ab-7a8f-47d4-9b15-6f0e9acb6958"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5e7f41ab-7a8f-47d4-9b15-6f0e9acb6958",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "879d520c-2f66-412c-b56e-b767fc65cf92"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "879d520c-2f66-412c-b56e-b767fc65cf92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6c44cbab-ff01-4753-af85-c10013a12cc2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6c44cbab-ff01-4753-af85-c10013a12cc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:25.309 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:25.309 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.309 /home/vagrant/spdk_repo/spdk 00:15:25.309 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:25.309 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:25.309 09:32:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:25.309 00:15:25.309 real 0m11.402s 00:15:25.309 user 0m27.274s 00:15:25.309 sys 0m19.381s 00:15:25.309 ************************************ 00:15:25.309 END TEST bdev_fio 00:15:25.309 ************************************ 00:15:25.309 09:32:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.309 09:32:51 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:25.309 09:32:51 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:25.309 09:32:51 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:25.309 09:32:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:25.309 09:32:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.309 09:32:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.309 ************************************ 00:15:25.309 START TEST bdev_verify 00:15:25.309 ************************************ 00:15:25.309 09:32:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:25.309 [2024-11-29 09:32:51.897034] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:25.309 [2024-11-29 09:32:51.897173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85965 ] 00:15:25.309 [2024-11-29 09:32:52.034413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:25.309 [2024-11-29 09:32:52.064340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:25.309 [2024-11-29 09:32:52.094813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.309 [2024-11-29 09:32:52.094883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.309 Running I/O for 5 seconds... 
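The `bdev_verify` run just launched points bdevperf at the same generated `bdev.json`. That file is produced earlier in the run and never echoed in this log, so the reconstruction below is an assumption about its shape: a standard SPDK JSON config with one `bdev_xnvme_create` entry per device, where the `io_mechanism` and `filename` values are placeholders and only the subsystem/config structure is standard.

```bash
#!/usr/bin/env bash
# Hypothetical reconstruction of the bdev.json consumed by bdevperf above;
# the real file is not shown in this log.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "name": "nvme0n1",
            "filename": "/dev/nvme0n1",
            "io_mechanism": "io_uring"
          }
        }
      ]
    }
  ]
}
EOF

# Replay the verify workload from the trace against it:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```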
00:15:27.267 22752.00 IOPS, 88.88 MiB/s [2024-11-29T09:32:55.564Z] 24128.00 IOPS, 94.25 MiB/s [2024-11-29T09:32:56.953Z] 23552.00 IOPS, 92.00 MiB/s [2024-11-29T09:32:57.527Z] 23600.00 IOPS, 92.19 MiB/s [2024-11-29T09:32:57.527Z] 23814.40 IOPS, 93.03 MiB/s 00:15:29.801 Latency(us) 00:15:29.801 [2024-11-29T09:32:57.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.801 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x0 length 0x80000 00:15:29.801 nvme0n1 : 5.05 1926.03 7.52 0.00 0.00 66334.25 4713.55 75820.11 00:15:29.801 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x80000 length 0x80000 00:15:29.801 nvme0n1 : 5.07 1894.13 7.40 0.00 0.00 67455.23 7662.67 71787.13 00:15:29.801 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x0 length 0x80000 00:15:29.801 nvme0n2 : 5.03 1906.81 7.45 0.00 0.00 66868.80 5721.80 66544.25 00:15:29.801 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x80000 length 0x80000 00:15:29.801 nvme0n2 : 5.07 1892.60 7.39 0.00 0.00 67393.43 8721.33 65334.35 00:15:29.801 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x0 length 0x80000 00:15:29.801 nvme0n3 : 5.04 1906.17 7.45 0.00 0.00 66766.69 12603.08 61301.37 00:15:29.801 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x80000 length 0x80000 00:15:29.801 nvme0n3 : 5.04 1904.69 7.44 0.00 0.00 66843.80 8922.98 66544.25 00:15:29.801 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x0 length 0x20000 00:15:29.801 nvme1n1 : 5.05 1901.60 7.43 0.00 0.00 66784.52 10889.06 64931.05 00:15:29.801 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x20000 length 0x20000 00:15:29.801 nvme1n1 : 5.08 1889.67 7.38 0.00 0.00 67256.44 9023.80 64124.46 00:15:29.801 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x0 length 0xbd0bd 00:15:29.801 nvme2n1 : 5.07 2385.79 9.32 0.00 0.00 53081.86 6427.57 56865.08 00:15:29.801 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:29.801 nvme2n1 : 5.08 2462.26 9.62 0.00 0.00 51411.51 6175.51 58478.28 00:15:29.801 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0x0 length 0xa0000 00:15:29.801 nvme3n1 : 5.07 1869.90 7.30 0.00 0.00 67646.75 6276.33 88725.66 00:15:29.801 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.801 Verification LBA range: start 0xa0000 length 0xa0000 00:15:29.801 nvme3n1 : 5.08 1636.76 6.39 0.00 0.00 77388.70 6503.19 91145.45 00:15:29.801 [2024-11-29T09:32:57.527Z] =================================================================================================================== 00:15:29.801 [2024-11-29T09:32:57.527Z] Total : 23576.40 92.10 0.00 0.00 64705.72 4713.55 91145.45 00:15:30.064 00:15:30.064 real 0m5.850s 00:15:30.064 user 0m9.321s 00:15:30.064 sys 0m1.488s 00:15:30.064 09:32:57 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.065 ************************************ 00:15:30.065 END TEST bdev_verify 00:15:30.065 ************************************ 00:15:30.065 09:32:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:30.065 09:32:57 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:30.065 09:32:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:30.065 09:32:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.065 09:32:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:30.065 ************************************ 00:15:30.065 START TEST bdev_verify_big_io 00:15:30.065 ************************************ 00:15:30.065 09:32:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:30.325 [2024-11-29 09:32:57.821778] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:30.326 [2024-11-29 09:32:57.821915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86055 ] 00:15:30.326 [2024-11-29 09:32:57.959479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:30.326 [2024-11-29 09:32:57.989163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:30.326 [2024-11-29 09:32:58.019699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.326 [2024-11-29 09:32:58.019723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.587 Running I/O for 5 seconds... 
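The asterisk banners around each sub-test and the `real/user/sys` lines after them come from the `run_test` wrapper in `common/autotest_common.sh`, which brackets and times every command it is handed. A condensed illustration of that idea follows; the actual implementation does more (xtrace control and the argument-count checks visible in the trace), so this only shows the bracketing-and-timing core.

```bash
#!/usr/bin/env bash
# Condensed illustration of the run_test wrapper, not the full
# autotest_common.sh implementation.

run_test() {
    local test_name=$1
    shift

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    # 'time' emits the real/user/sys lines seen after each sub-test.
    time "$@"
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# e.g.: run_test bdev_verify_big_io ./build/examples/bdevperf \
#           --json ./test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5
```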
00:15:36.713 1712.00 IOPS, 107.00 MiB/s [2024-11-29T09:33:04.439Z] 3232.00 IOPS, 202.00 MiB/s [2024-11-29T09:33:04.439Z] 3042.67 IOPS, 190.17 MiB/s 00:15:36.713 Latency(us) 00:15:36.713 [2024-11-29T09:33:04.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.713 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x0 length 0x8000 00:15:36.713 nvme0n1 : 5.89 86.93 5.43 0.00 0.00 1447872.05 7713.08 1219574.55 00:15:36.713 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x8000 length 0x8000 00:15:36.713 nvme0n1 : 5.92 151.42 9.46 0.00 0.00 820850.38 8519.68 916294.10 00:15:36.713 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x0 length 0x8000 00:15:36.713 nvme0n2 : 5.93 86.41 5.40 0.00 0.00 1317749.37 237139.50 1355082.83 00:15:36.713 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x8000 length 0x8000 00:15:36.713 nvme0n2 : 5.90 149.24 9.33 0.00 0.00 793056.64 129862.10 764653.88 00:15:36.713 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x0 length 0x8000 00:15:36.713 nvme0n3 : 5.97 75.04 4.69 0.00 0.00 1568688.78 35490.26 3329632.10 00:15:36.713 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x8000 length 0x8000 00:15:36.713 nvme0n3 : 5.68 146.47 9.15 0.00 0.00 800113.12 121796.14 1716438.25 00:15:36.713 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x0 length 0x2000 00:15:36.713 nvme1n1 : 5.94 75.46 4.72 0.00 0.00 1508902.32 7763.50 3510309.81 00:15:36.713 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x2000 length 0x2000 00:15:36.713 nvme1n1 : 5.89 162.91 10.18 0.00 0.00 700093.36 111310.38 1096971.82 00:15:36.713 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x0 length 0xbd0b 00:15:36.713 nvme2n1 : 5.94 148.13 9.26 0.00 0.00 737517.34 8670.92 1071160.71 00:15:36.713 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:36.713 nvme2n1 : 5.91 184.15 11.51 0.00 0.00 608518.83 8368.44 1084066.26 00:15:36.713 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0x0 length 0xa000 00:15:36.713 nvme3n1 : 5.98 128.42 8.03 0.00 0.00 812845.06 1323.32 1742249.35 00:15:36.713 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:36.713 Verification LBA range: start 0xa000 length 0xa000 00:15:36.713 nvme3n1 : 5.91 165.09 10.32 0.00 0.00 663099.53 4763.96 1413157.81 00:15:36.713 [2024-11-29T09:33:04.439Z] =================================================================================================================== 00:15:36.713 [2024-11-29T09:33:04.439Z] Total : 1559.68 97.48 0.00 0.00 885700.91 1323.32 3510309.81 00:15:36.974 00:15:36.974 real 0m6.788s 00:15:36.974 user 0m12.480s 00:15:36.974 sys 0m0.418s 00:15:36.974 09:33:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.974 
************************************ 00:15:36.974 END TEST bdev_verify_big_io 00:15:36.974 ************************************ 00:15:36.974 09:33:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.974 09:33:04 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.974 09:33:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:36.974 09:33:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.974 09:33:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.974 ************************************ 00:15:36.974 START TEST bdev_write_zeroes 00:15:36.974 ************************************ 00:15:36.974 09:33:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.974 [2024-11-29 09:33:04.684711] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:36.974 [2024-11-29 09:33:04.684842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86148 ] 00:15:37.233 [2024-11-29 09:33:04.821682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:37.233 [2024-11-29 09:33:04.852136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.233 [2024-11-29 09:33:04.880978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.495 Running I/O for 1 seconds... 
00:15:38.696 68096.00 IOPS, 266.00 MiB/s 00:15:38.696 Latency(us) 00:15:38.696 [2024-11-29T09:33:06.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.696 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.696 nvme0n1 : 1.02 11272.91 44.03 0.00 0.00 11342.92 8318.03 28230.89 00:15:38.696 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.696 nvme0n2 : 1.02 11260.26 43.99 0.00 0.00 11345.11 8368.44 26617.70 00:15:38.696 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.696 nvme0n3 : 1.02 11122.11 43.45 0.00 0.00 11472.60 8418.86 27424.30 00:15:38.696 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.696 nvme1n1 : 1.03 11234.55 43.88 0.00 0.00 11348.36 8418.86 23492.14 00:15:38.696 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.696 nvme2n1 : 1.02 11677.26 45.61 0.00 0.00 10906.78 5772.21 20164.92 00:15:38.696 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.696 nvme3n1 : 1.02 11285.70 44.08 0.00 0.00 11209.35 3982.57 26819.35 00:15:38.696 [2024-11-29T09:33:06.422Z] =================================================================================================================== 00:15:38.696 [2024-11-29T09:33:06.422Z] Total : 67852.79 265.05 0.00 0.00 11268.46 3982.57 28230.89 00:15:38.696 00:15:38.696 real 0m1.765s 00:15:38.696 user 0m1.036s 00:15:38.696 sys 0m0.537s 00:15:38.696 09:33:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.696 ************************************ 00:15:38.696 END TEST bdev_write_zeroes 00:15:38.696 ************************************ 00:15:38.696 09:33:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:38.956 09:33:06 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:38.956 09:33:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:38.956 09:33:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.956 09:33:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.956 ************************************ 00:15:38.956 START TEST bdev_json_nonenclosed 00:15:38.956 ************************************ 00:15:38.956 09:33:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:38.956 [2024-11-29 09:33:06.514302] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:38.956 [2024-11-29 09:33:06.514438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86192 ] 00:15:38.956 [2024-11-29 09:33:06.649528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:38.956 [2024-11-29 09:33:06.675110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.218 [2024-11-29 09:33:06.703987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.218 [2024-11-29 09:33:06.704096] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:39.218 [2024-11-29 09:33:06.704119] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:39.218 [2024-11-29 09:33:06.704129] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:39.218 00:15:39.218 real 0m0.344s 00:15:39.218 user 0m0.122s 00:15:39.218 sys 0m0.118s 00:15:39.218 09:33:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.218 ************************************ 00:15:39.218 END TEST bdev_json_nonenclosed 00:15:39.218 ************************************ 00:15:39.218 09:33:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:39.218 09:33:06 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:39.218 09:33:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:39.218 09:33:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.218 09:33:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.218 ************************************ 00:15:39.218 START TEST bdev_json_nonarray 00:15:39.218 ************************************ 00:15:39.218 09:33:06 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:39.218 [2024-11-29 09:33:06.933312] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:39.218 [2024-11-29 09:33:06.933663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86213 ] 00:15:39.479 [2024-11-29 09:33:07.076733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:39.479 [2024-11-29 09:33:07.107646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.479 [2024-11-29 09:33:07.136045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.479 [2024-11-29 09:33:07.136152] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
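Note: the two JSON-config tests here only surface the parser errors, so the input shapes they exercise are easy to miss. A bdevperf --json config must be a single object whose "subsystems" key is an array; the malformed variants look roughly like this (illustrative only, the actual nonenclosed.json and nonarray.json contents are not shown in this log):

    # accepted shape (minimal)
    { "subsystems": [] }

    # nonenclosed-style input: top level not enclosed in {}
    "subsystems": []

    # nonarray-style input: "subsystems" is an object, not an array
    { "subsystems": {} }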
00:15:39.479 [2024-11-29 09:33:07.136171] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:39.479 [2024-11-29 09:33:07.136181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:39.740 00:15:39.740 real 0m0.364s 00:15:39.740 user 0m0.130s 00:15:39.740 sys 0m0.126s 00:15:39.740 09:33:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.740 09:33:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:39.740 ************************************ 00:15:39.740 END TEST bdev_json_nonarray 00:15:39.740 ************************************ 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:39.740 09:33:07 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:40.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:48.458 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.458 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.458 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.458 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.458 ************************************ 00:15:48.458 END TEST blockdev_xnvme 00:15:48.458 ************************************ 00:15:48.458 00:15:48.458 real 0m51.408s 00:15:48.458 user 1m12.959s 00:15:48.458 sys 0m39.838s 00:15:48.458 09:33:15 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.458 09:33:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.458 09:33:15 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:48.458 09:33:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:48.458 09:33:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.458 09:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:48.458 ************************************ 00:15:48.458 START TEST ublk 00:15:48.458 ************************************ 00:15:48.458 09:33:15 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:48.458 * Looking for test storage... 
00:15:48.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:48.458 09:33:15 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:48.458 09:33:15 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:48.458 09:33:15 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:15:48.458 09:33:15 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.458 09:33:15 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.458 09:33:15 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.458 09:33:15 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.458 09:33:15 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.458 09:33:15 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.458 09:33:15 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.458 09:33:15 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.458 09:33:15 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:48.458 09:33:15 ublk -- scripts/common.sh@345 -- # : 1 00:15:48.458 09:33:15 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.458 09:33:15 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.458 09:33:15 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:48.458 09:33:15 ublk -- scripts/common.sh@353 -- # local d=1 00:15:48.458 09:33:15 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.458 09:33:15 ublk -- scripts/common.sh@355 -- # echo 1 00:15:48.458 09:33:15 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.458 09:33:15 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@353 -- # local d=2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.458 09:33:15 ublk -- scripts/common.sh@355 -- # echo 2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.458 09:33:15 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.458 09:33:15 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.459 09:33:15 ublk -- scripts/common.sh@368 -- # return 0 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:48.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.459 --rc genhtml_branch_coverage=1 00:15:48.459 --rc genhtml_function_coverage=1 00:15:48.459 --rc genhtml_legend=1 00:15:48.459 --rc geninfo_all_blocks=1 00:15:48.459 --rc geninfo_unexecuted_blocks=1 00:15:48.459 00:15:48.459 ' 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:48.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.459 --rc genhtml_branch_coverage=1 00:15:48.459 --rc genhtml_function_coverage=1 00:15:48.459 --rc genhtml_legend=1 00:15:48.459 --rc geninfo_all_blocks=1 00:15:48.459 --rc geninfo_unexecuted_blocks=1 00:15:48.459 00:15:48.459 ' 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:48.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.459 --rc genhtml_branch_coverage=1 00:15:48.459 --rc 
genhtml_function_coverage=1 00:15:48.459 --rc genhtml_legend=1 00:15:48.459 --rc geninfo_all_blocks=1 00:15:48.459 --rc geninfo_unexecuted_blocks=1 00:15:48.459 00:15:48.459 ' 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:48.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.459 --rc genhtml_branch_coverage=1 00:15:48.459 --rc genhtml_function_coverage=1 00:15:48.459 --rc genhtml_legend=1 00:15:48.459 --rc geninfo_all_blocks=1 00:15:48.459 --rc geninfo_unexecuted_blocks=1 00:15:48.459 00:15:48.459 ' 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:48.459 09:33:15 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:48.459 09:33:15 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:48.459 09:33:15 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:48.459 09:33:15 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:48.459 09:33:15 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:48.459 09:33:15 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:48.459 09:33:15 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:48.459 09:33:15 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:48.459 09:33:15 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.459 09:33:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.459 ************************************ 00:15:48.459 START TEST test_save_ublk_config 00:15:48.459 ************************************ 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86522 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86522 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86522 ']' 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:48.459 09:33:15 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.459 09:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:48.459 [2024-11-29 09:33:16.057216] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:48.459 [2024-11-29 09:33:16.057371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86522 ] 00:15:48.721 [2024-11-29 09:33:16.194173] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:48.721 [2024-11-29 09:33:16.224253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.721 [2024-11-29 09:33:16.252725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:49.295 [2024-11-29 09:33:16.907612] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:49.295 [2024-11-29 09:33:16.908517] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:49.295 malloc0 00:15:49.295 [2024-11-29 09:33:16.939764] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:49.295 [2024-11-29 09:33:16.939858] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:49.295 [2024-11-29 09:33:16.939874] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:49.295 [2024-11-29 09:33:16.939881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:49.295 [2024-11-29 09:33:16.948709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:49.295 [2024-11-29 09:33:16.948734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:49.295 [2024-11-29 09:33:16.955632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:49.295 [2024-11-29 09:33:16.955739] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:49.295 [2024-11-29 09:33:16.972787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:49.295 0 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- 
ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.295 09:33:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.557 09:33:17 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:49.557 "subsystems": [ 00:15:49.557 { 00:15:49.557 "subsystem": "fsdev", 00:15:49.557 "config": [ 00:15:49.557 { 00:15:49.557 "method": "fsdev_set_opts", 00:15:49.557 "params": { 00:15:49.557 "fsdev_io_pool_size": 65535, 00:15:49.557 "fsdev_io_cache_size": 256 00:15:49.557 } 00:15:49.557 } 00:15:49.557 ] 00:15:49.557 }, 00:15:49.557 { 00:15:49.557 "subsystem": "keyring", 00:15:49.557 "config": [] 00:15:49.557 }, 00:15:49.557 { 00:15:49.557 "subsystem": "iobuf", 00:15:49.557 "config": [ 00:15:49.557 { 00:15:49.557 "method": "iobuf_set_options", 00:15:49.557 "params": { 00:15:49.557 "small_pool_count": 8192, 00:15:49.557 "large_pool_count": 1024, 00:15:49.557 "small_bufsize": 8192, 00:15:49.557 "large_bufsize": 135168, 00:15:49.557 "enable_numa": false 00:15:49.557 } 00:15:49.557 } 00:15:49.557 ] 00:15:49.557 }, 00:15:49.557 { 00:15:49.557 "subsystem": "sock", 00:15:49.557 "config": [ 00:15:49.557 { 00:15:49.557 "method": "sock_set_default_impl", 00:15:49.557 "params": { 00:15:49.557 "impl_name": "posix" 00:15:49.557 } 00:15:49.557 }, 00:15:49.557 { 00:15:49.557 "method": "sock_impl_set_options", 00:15:49.558 "params": { 00:15:49.558 "impl_name": "ssl", 00:15:49.558 "recv_buf_size": 4096, 00:15:49.558 "send_buf_size": 4096, 00:15:49.558 "enable_recv_pipe": true, 00:15:49.558 "enable_quickack": false, 00:15:49.558 "enable_placement_id": 0, 00:15:49.558 "enable_zerocopy_send_server": true, 00:15:49.558 "enable_zerocopy_send_client": false, 00:15:49.558 "zerocopy_threshold": 0, 00:15:49.558 "tls_version": 0, 00:15:49.558 "enable_ktls": false 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "sock_impl_set_options", 00:15:49.558 "params": { 00:15:49.558 "impl_name": "posix", 00:15:49.558 "recv_buf_size": 2097152, 00:15:49.558 "send_buf_size": 2097152, 00:15:49.558 "enable_recv_pipe": true, 00:15:49.558 "enable_quickack": false, 00:15:49.558 "enable_placement_id": 0, 00:15:49.558 "enable_zerocopy_send_server": true, 00:15:49.558 "enable_zerocopy_send_client": false, 00:15:49.558 "zerocopy_threshold": 0, 00:15:49.558 "tls_version": 0, 00:15:49.558 "enable_ktls": false 00:15:49.558 } 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "vmd", 00:15:49.558 "config": [] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "accel", 00:15:49.558 "config": [ 00:15:49.558 { 00:15:49.558 "method": "accel_set_options", 00:15:49.558 "params": { 00:15:49.558 "small_cache_size": 128, 00:15:49.558 "large_cache_size": 16, 00:15:49.558 "task_count": 2048, 00:15:49.558 "sequence_count": 2048, 00:15:49.558 "buf_count": 2048 00:15:49.558 } 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "bdev", 00:15:49.558 "config": [ 00:15:49.558 { 00:15:49.558 "method": "bdev_set_options", 00:15:49.558 "params": { 00:15:49.558 "bdev_io_pool_size": 65535, 00:15:49.558 "bdev_io_cache_size": 256, 00:15:49.558 "bdev_auto_examine": true, 00:15:49.558 "iobuf_small_cache_size": 128, 00:15:49.558 "iobuf_large_cache_size": 16 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "bdev_raid_set_options", 
00:15:49.558 "params": { 00:15:49.558 "process_window_size_kb": 1024, 00:15:49.558 "process_max_bandwidth_mb_sec": 0 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "bdev_iscsi_set_options", 00:15:49.558 "params": { 00:15:49.558 "timeout_sec": 30 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "bdev_nvme_set_options", 00:15:49.558 "params": { 00:15:49.558 "action_on_timeout": "none", 00:15:49.558 "timeout_us": 0, 00:15:49.558 "timeout_admin_us": 0, 00:15:49.558 "keep_alive_timeout_ms": 10000, 00:15:49.558 "arbitration_burst": 0, 00:15:49.558 "low_priority_weight": 0, 00:15:49.558 "medium_priority_weight": 0, 00:15:49.558 "high_priority_weight": 0, 00:15:49.558 "nvme_adminq_poll_period_us": 10000, 00:15:49.558 "nvme_ioq_poll_period_us": 0, 00:15:49.558 "io_queue_requests": 0, 00:15:49.558 "delay_cmd_submit": true, 00:15:49.558 "transport_retry_count": 4, 00:15:49.558 "bdev_retry_count": 3, 00:15:49.558 "transport_ack_timeout": 0, 00:15:49.558 "ctrlr_loss_timeout_sec": 0, 00:15:49.558 "reconnect_delay_sec": 0, 00:15:49.558 "fast_io_fail_timeout_sec": 0, 00:15:49.558 "disable_auto_failback": false, 00:15:49.558 "generate_uuids": false, 00:15:49.558 "transport_tos": 0, 00:15:49.558 "nvme_error_stat": false, 00:15:49.558 "rdma_srq_size": 0, 00:15:49.558 "io_path_stat": false, 00:15:49.558 "allow_accel_sequence": false, 00:15:49.558 "rdma_max_cq_size": 0, 00:15:49.558 "rdma_cm_event_timeout_ms": 0, 00:15:49.558 "dhchap_digests": [ 00:15:49.558 "sha256", 00:15:49.558 "sha384", 00:15:49.558 "sha512" 00:15:49.558 ], 00:15:49.558 "dhchap_dhgroups": [ 00:15:49.558 "null", 00:15:49.558 "ffdhe2048", 00:15:49.558 "ffdhe3072", 00:15:49.558 "ffdhe4096", 00:15:49.558 "ffdhe6144", 00:15:49.558 "ffdhe8192" 00:15:49.558 ] 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "bdev_nvme_set_hotplug", 00:15:49.558 "params": { 00:15:49.558 "period_us": 100000, 00:15:49.558 "enable": false 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "bdev_malloc_create", 00:15:49.558 "params": { 00:15:49.558 "name": "malloc0", 00:15:49.558 "num_blocks": 8192, 00:15:49.558 "block_size": 4096, 00:15:49.558 "physical_block_size": 4096, 00:15:49.558 "uuid": "dcc5a193-5ce1-44b7-b39e-57a50d37d1e9", 00:15:49.558 "optimal_io_boundary": 0, 00:15:49.558 "md_size": 0, 00:15:49.558 "dif_type": 0, 00:15:49.558 "dif_is_head_of_md": false, 00:15:49.558 "dif_pi_format": 0 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "bdev_wait_for_examine" 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "scsi", 00:15:49.558 "config": null 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "scheduler", 00:15:49.558 "config": [ 00:15:49.558 { 00:15:49.558 "method": "framework_set_scheduler", 00:15:49.558 "params": { 00:15:49.558 "name": "static" 00:15:49.558 } 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "vhost_scsi", 00:15:49.558 "config": [] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "vhost_blk", 00:15:49.558 "config": [] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "ublk", 00:15:49.558 "config": [ 00:15:49.558 { 00:15:49.558 "method": "ublk_create_target", 00:15:49.558 "params": { 00:15:49.558 "cpumask": "1" 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "ublk_start_disk", 00:15:49.558 "params": { 00:15:49.558 "bdev_name": "malloc0", 00:15:49.558 "ublk_id": 0, 00:15:49.558 "num_queues": 1, 00:15:49.558 "queue_depth": 128 
00:15:49.558 } 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "nbd", 00:15:49.558 "config": [] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "nvmf", 00:15:49.558 "config": [ 00:15:49.558 { 00:15:49.558 "method": "nvmf_set_config", 00:15:49.558 "params": { 00:15:49.558 "discovery_filter": "match_any", 00:15:49.558 "admin_cmd_passthru": { 00:15:49.558 "identify_ctrlr": false 00:15:49.558 }, 00:15:49.558 "dhchap_digests": [ 00:15:49.558 "sha256", 00:15:49.558 "sha384", 00:15:49.558 "sha512" 00:15:49.558 ], 00:15:49.558 "dhchap_dhgroups": [ 00:15:49.558 "null", 00:15:49.558 "ffdhe2048", 00:15:49.558 "ffdhe3072", 00:15:49.558 "ffdhe4096", 00:15:49.558 "ffdhe6144", 00:15:49.558 "ffdhe8192" 00:15:49.558 ] 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "nvmf_set_max_subsystems", 00:15:49.558 "params": { 00:15:49.558 "max_subsystems": 1024 00:15:49.558 } 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "method": "nvmf_set_crdt", 00:15:49.558 "params": { 00:15:49.558 "crdt1": 0, 00:15:49.558 "crdt2": 0, 00:15:49.558 "crdt3": 0 00:15:49.558 } 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 }, 00:15:49.558 { 00:15:49.558 "subsystem": "iscsi", 00:15:49.558 "config": [ 00:15:49.558 { 00:15:49.558 "method": "iscsi_set_options", 00:15:49.558 "params": { 00:15:49.558 "node_base": "iqn.2016-06.io.spdk", 00:15:49.558 "max_sessions": 128, 00:15:49.558 "max_connections_per_session": 2, 00:15:49.558 "max_queue_depth": 64, 00:15:49.558 "default_time2wait": 2, 00:15:49.558 "default_time2retain": 20, 00:15:49.558 "first_burst_length": 8192, 00:15:49.558 "immediate_data": true, 00:15:49.558 "allow_duplicated_isid": false, 00:15:49.558 "error_recovery_level": 0, 00:15:49.558 "nop_timeout": 60, 00:15:49.558 "nop_in_interval": 30, 00:15:49.558 "disable_chap": false, 00:15:49.558 "require_chap": false, 00:15:49.558 "mutual_chap": false, 00:15:49.558 "chap_group": 0, 00:15:49.558 "max_large_datain_per_connection": 64, 00:15:49.558 "max_r2t_per_connection": 4, 00:15:49.558 "pdu_pool_size": 36864, 00:15:49.558 "immediate_data_pool_size": 16384, 00:15:49.558 "data_out_pool_size": 2048 00:15:49.558 } 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 } 00:15:49.558 ] 00:15:49.558 }' 00:15:49.558 09:33:17 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86522 00:15:49.558 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86522 ']' 00:15:49.558 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86522 00:15:49.558 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:49.558 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.558 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86522 00:15:49.821 killing process with pid 86522 00:15:49.821 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.821 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.821 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86522' 00:15:49.821 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86522 00:15:49.821 09:33:17 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86522 00:15:50.083 [2024-11-29 09:33:17.572027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_STOP_DEV 00:15:50.083 [2024-11-29 09:33:17.603719] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:50.083 [2024-11-29 09:33:17.603854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:50.083 [2024-11-29 09:33:17.610624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:50.083 [2024-11-29 09:33:17.610693] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:50.083 [2024-11-29 09:33:17.610710] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:50.083 [2024-11-29 09:33:17.610737] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:50.083 [2024-11-29 09:33:17.610904] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:50.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.345 09:33:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86555 00:15:50.345 09:33:18 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86555 00:15:50.345 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86555 ']' 00:15:50.345 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.345 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.345 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.345 09:33:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:50.345 "subsystems": [ 00:15:50.345 { 00:15:50.345 "subsystem": "fsdev", 00:15:50.345 "config": [ 00:15:50.345 { 00:15:50.345 "method": "fsdev_set_opts", 00:15:50.345 "params": { 00:15:50.345 "fsdev_io_pool_size": 65535, 00:15:50.345 "fsdev_io_cache_size": 256 00:15:50.345 } 00:15:50.345 } 00:15:50.345 ] 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "subsystem": "keyring", 00:15:50.345 "config": [] 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "subsystem": "iobuf", 00:15:50.345 "config": [ 00:15:50.345 { 00:15:50.345 "method": "iobuf_set_options", 00:15:50.345 "params": { 00:15:50.345 "small_pool_count": 8192, 00:15:50.345 "large_pool_count": 1024, 00:15:50.345 "small_bufsize": 8192, 00:15:50.345 "large_bufsize": 135168, 00:15:50.345 "enable_numa": false 00:15:50.345 } 00:15:50.345 } 00:15:50.345 ] 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "subsystem": "sock", 00:15:50.345 "config": [ 00:15:50.345 { 00:15:50.345 "method": "sock_set_default_impl", 00:15:50.345 "params": { 00:15:50.345 "impl_name": "posix" 00:15:50.345 } 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "method": "sock_impl_set_options", 00:15:50.345 "params": { 00:15:50.345 "impl_name": "ssl", 00:15:50.345 "recv_buf_size": 4096, 00:15:50.345 "send_buf_size": 4096, 00:15:50.345 "enable_recv_pipe": true, 00:15:50.345 "enable_quickack": false, 00:15:50.345 "enable_placement_id": 0, 00:15:50.345 "enable_zerocopy_send_server": true, 00:15:50.345 "enable_zerocopy_send_client": false, 00:15:50.345 "zerocopy_threshold": 0, 00:15:50.345 "tls_version": 0, 00:15:50.345 "enable_ktls": false 00:15:50.345 } 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "method": "sock_impl_set_options", 00:15:50.345 "params": { 00:15:50.345 "impl_name": "posix", 00:15:50.345 "recv_buf_size": 2097152, 00:15:50.345 "send_buf_size": 2097152, 00:15:50.345 "enable_recv_pipe": true, 00:15:50.345 "enable_quickack": false, 00:15:50.345 
"enable_placement_id": 0, 00:15:50.345 "enable_zerocopy_send_server": true, 00:15:50.345 "enable_zerocopy_send_client": false, 00:15:50.345 "zerocopy_threshold": 0, 00:15:50.345 "tls_version": 0, 00:15:50.345 "enable_ktls": false 00:15:50.345 } 00:15:50.345 } 00:15:50.345 ] 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "subsystem": "vmd", 00:15:50.345 "config": [] 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "subsystem": "accel", 00:15:50.345 "config": [ 00:15:50.345 { 00:15:50.345 "method": "accel_set_options", 00:15:50.345 "params": { 00:15:50.345 "small_cache_size": 128, 00:15:50.345 "large_cache_size": 16, 00:15:50.345 "task_count": 2048, 00:15:50.345 "sequence_count": 2048, 00:15:50.345 "buf_count": 2048 00:15:50.345 } 00:15:50.345 } 00:15:50.345 ] 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "subsystem": "bdev", 00:15:50.345 "config": [ 00:15:50.345 { 00:15:50.345 "method": "bdev_set_options", 00:15:50.345 "params": { 00:15:50.345 "bdev_io_pool_size": 65535, 00:15:50.345 "bdev_io_cache_size": 256, 00:15:50.345 "bdev_auto_examine": true, 00:15:50.345 "iobuf_small_cache_size": 128, 00:15:50.345 "iobuf_large_cache_size": 16 00:15:50.345 } 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "method": "bdev_raid_set_options", 00:15:50.345 "params": { 00:15:50.345 "process_window_size_kb": 1024, 00:15:50.345 "process_max_bandwidth_mb_sec": 0 00:15:50.345 } 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "method": "bdev_iscsi_set_options", 00:15:50.345 "params": { 00:15:50.345 "timeout_sec": 30 00:15:50.345 } 00:15:50.345 }, 00:15:50.345 { 00:15:50.345 "method": "bdev_nvme_set_options", 00:15:50.345 "params": { 00:15:50.345 "action_on_timeout": "none", 00:15:50.345 "timeout_us": 0, 00:15:50.345 "timeout_admin_us": 0, 00:15:50.345 "keep_alive_timeout_ms": 10000, 00:15:50.345 "arbitration_burst": 0, 00:15:50.345 "low_priority_weight": 0, 00:15:50.345 "medium_priority_weight": 0, 00:15:50.346 "high_priority_weight": 0, 00:15:50.346 "nvme_adminq_poll_period_us": 10000, 00:15:50.346 "nvme_ioq_poll_period_us": 0, 00:15:50.346 "io_queue_requests": 0, 00:15:50.346 "delay_cmd_submit": true, 00:15:50.346 "transport_retry_count": 4, 00:15:50.346 "bdev_retry_count": 3, 00:15:50.346 "transport_ack_timeout": 0, 00:15:50.346 "ctrlr_loss_timeout_sec": 0, 00:15:50.346 "reconnect_delay_sec": 0, 00:15:50.346 "fast_io_fail_timeout_sec": 0, 00:15:50.346 "disable_auto_failback": false, 00:15:50.346 "generate_uuids": false, 00:15:50.346 "transport_tos": 0, 00:15:50.346 "nvme_error_stat": false, 00:15:50.346 "rdma_srq_size": 0, 00:15:50.346 "io_path_stat": false, 00:15:50.346 "allow_accel_sequence": false, 00:15:50.346 "rdma_max_cq_size": 0, 00:15:50.346 "rdma_cm_event_timeout_ms": 0, 00:15:50.346 "dhchap_digests": [ 00:15:50.346 "sha256", 00:15:50.346 "sha384", 00:15:50.346 "sha512" 00:15:50.346 ], 00:15:50.346 "dhchap_dhgroups": [ 00:15:50.346 "null", 00:15:50.346 "ffdhe2048", 00:15:50.346 "ffdhe3072", 00:15:50.346 "ffdhe4096", 00:15:50.346 "ffdhe6144", 00:15:50.346 "ffdhe8192" 00:15:50.346 ] 00:15:50.346 } 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "method": "bdev_nvme_set_hotplug", 00:15:50.346 "params": { 00:15:50.346 "period_us": 100000, 00:15:50.346 "enable": false 00:15:50.346 } 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "method": "bdev_malloc_create", 00:15:50.346 "params": { 00:15:50.346 "name": "malloc0", 00:15:50.346 "num_blocks": 8192, 00:15:50.346 "block_size": 4096, 00:15:50.346 "physical_block_size": 4096, 00:15:50.346 "uuid": "dcc5a193-5ce1-44b7-b39e-57a50d37d1e9", 00:15:50.346 "optimal_io_boundary": 0, 
00:15:50.346 "md_size": 0, 00:15:50.346 "dif_type": 0, 00:15:50.346 "dif_is_head_of_md": false, 00:15:50.346 "dif_pi_format": 0 00:15:50.346 } 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "method": "bdev_wait_for_examine" 00:15:50.346 } 00:15:50.346 ] 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "scsi", 00:15:50.346 "config": null 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "scheduler", 00:15:50.346 "config": [ 00:15:50.346 { 00:15:50.346 "method": "framework_set_scheduler", 00:15:50.346 "params": { 00:15:50.346 "name": "static" 00:15:50.346 } 00:15:50.346 } 00:15:50.346 ] 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "vhost_scsi", 00:15:50.346 "config": [] 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "vhost_blk", 00:15:50.346 "config": [] 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "ublk", 00:15:50.346 "config": [ 00:15:50.346 { 00:15:50.346 "method": "ublk_create_target", 00:15:50.346 "params": { 00:15:50.346 "cpumask": "1" 00:15:50.346 } 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "method": "ublk_start_disk", 00:15:50.346 "params": { 00:15:50.346 "bdev_name": "malloc0", 00:15:50.346 "ublk_id": 0, 00:15:50.346 "num_queues": 1, 00:15:50.346 "queue_depth": 128 00:15:50.346 } 00:15:50.346 } 00:15:50.346 ] 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "nbd", 00:15:50.346 "config": [] 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "nvmf", 00:15:50.346 "config": [ 00:15:50.346 { 00:15:50.346 "method": "nvmf_set_config", 00:15:50.346 "params": { 00:15:50.346 "discovery_filter": "match_any", 00:15:50.346 "admin_cmd_passthru": { 00:15:50.346 "identify_ctrlr": false 00:15:50.346 }, 00:15:50.346 "dhchap_digests": [ 00:15:50.346 "sha256", 00:15:50.346 "sha384", 00:15:50.346 "sha512" 00:15:50.346 ], 00:15:50.346 "dhchap_dhgroups": [ 00:15:50.346 "null", 00:15:50.346 "ffdhe2048", 00:15:50.346 "ffdhe3072", 00:15:50.346 "ffdhe4096", 00:15:50.346 "ffdhe6144", 00:15:50.346 "ffdhe8192" 00:15:50.346 ] 00:15:50.346 } 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "method": "nvmf_set_max_subsystems", 00:15:50.346 "params": { 00:15:50.346 "max_subsystems": 1024 00:15:50.346 } 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "method": "nvmf_set_crdt", 00:15:50.346 "params": { 00:15:50.346 "crdt1": 0, 00:15:50.346 "crdt2": 0, 00:15:50.346 "crdt3": 0 00:15:50.346 } 00:15:50.346 } 00:15:50.346 ] 00:15:50.346 }, 00:15:50.346 { 00:15:50.346 "subsystem": "iscsi", 00:15:50.346 "config": [ 00:15:50.346 { 00:15:50.346 "method": "iscsi_set_options", 00:15:50.346 "params": { 00:15:50.346 "node_base": "iqn.2016-06.io.spdk", 00:15:50.346 "max_sessions": 128, 00:15:50.346 "max_connections_per_session": 2, 00:15:50.346 "max_queue_depth": 64, 00:15:50.346 "default_time2wait": 2, 00:15:50.346 "default_time2retain": 20, 00:15:50.346 "first_burst_length": 8192, 00:15:50.346 "immediate_data": true, 00:15:50.346 "allow_duplicated_isid": false, 00:15:50.346 "error_recovery_level": 0, 00:15:50.346 "nop_timeout": 60, 00:15:50.346 "nop_in_interval": 30, 00:15:50.346 "disable_chap": false, 00:15:50.346 "require_chap": false, 00:15:50.346 "mutual_chap": false, 00:15:50.346 "chap_group": 0, 00:15:50.346 "max_large_datain_per_connection": 64, 00:15:50.346 "max_r2t_per_connection": 4, 00:15:50.346 "pdu_pool_size": 36864, 00:15:50.346 "immediate_data_pool_size": 16384, 00:15:50.346 "data_out_pool_size": 2048 00:15:50.346 } 00:15:50.346 } 00:15:50.346 ] 00:15:50.346 } 00:15:50.346 ] 00:15:50.346 }' 00:15:50.346 09:33:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:50.346 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.346 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.607 [2024-11-29 09:33:18.144808] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:50.607 [2024-11-29 09:33:18.145149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86555 ] 00:15:50.607 [2024-11-29 09:33:18.282937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:50.607 [2024-11-29 09:33:18.312834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.868 [2024-11-29 09:33:18.341576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.129 [2024-11-29 09:33:18.730607] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:51.129 [2024-11-29 09:33:18.730963] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:51.129 [2024-11-29 09:33:18.738739] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:51.129 [2024-11-29 09:33:18.738831] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:51.129 [2024-11-29 09:33:18.738842] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:51.130 [2024-11-29 09:33:18.738851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:51.130 [2024-11-29 09:33:18.747694] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:51.130 [2024-11-29 09:33:18.747729] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:51.130 [2024-11-29 09:33:18.754623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:51.130 [2024-11-29 09:33:18.754733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:51.130 [2024-11-29 09:33:18.771620] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:51.391 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.391 09:33:18 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:51.391 09:33:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.391 09:33:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86555 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86555 ']' 00:15:51.391 
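Note: distilled from the trace above, the round trip that test_save_ublk_config checks can be driven by hand with the standard scripts/rpc.py client (the test itself goes through the rpc_cmd wrapper instead; the malloc sizing below matches the malloc0 bdev in the saved config, 8192 blocks of 4096 bytes):

    ./build/bin/spdk_tgt -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096     # 32 MiB, 4 KiB blocks
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128     # exposes /dev/ublkb0
    ./scripts/rpc.py save_config > ublk.json
    # a fresh target started from the saved config should recreate /dev/ublkb0
    ./build/bin/spdk_tgt -L ublk -c ublk.json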
09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86555 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86555 00:15:51.391 killing process with pid 86555 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86555' 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86555 00:15:51.391 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86555 00:15:51.652 [2024-11-29 09:33:19.341071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:51.652 [2024-11-29 09:33:19.371719] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:51.652 [2024-11-29 09:33:19.371862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:51.914 [2024-11-29 09:33:19.378623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:51.914 [2024-11-29 09:33:19.378691] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:51.914 [2024-11-29 09:33:19.378700] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:51.914 [2024-11-29 09:33:19.378729] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:51.914 [2024-11-29 09:33:19.378885] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:52.175 09:33:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:52.175 ************************************ 00:15:52.175 END TEST test_save_ublk_config 00:15:52.175 ************************************ 00:15:52.175 00:15:52.175 real 0m3.857s 00:15:52.175 user 0m2.638s 00:15:52.175 sys 0m1.894s 00:15:52.175 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.175 09:33:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:52.175 09:33:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86611 00:15:52.175 09:33:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:52.175 09:33:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86611 00:15:52.175 09:33:19 ublk -- common/autotest_common.sh@835 -- # '[' -z 86611 ']' 00:15:52.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.175 09:33:19 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.175 09:33:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:52.175 09:33:19 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.175 09:33:19 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.175 09:33:19 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.175 09:33:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.436 [2024-11-29 09:33:19.969828] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:15:52.436 [2024-11-29 09:33:19.970601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86611 ] 00:15:52.436 [2024-11-29 09:33:20.108789] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:52.436 [2024-11-29 09:33:20.137224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:52.698 [2024-11-29 09:33:20.167859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.698 [2024-11-29 09:33:20.167902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.272 09:33:20 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.272 09:33:20 ublk -- common/autotest_common.sh@868 -- # return 0 00:15:53.272 09:33:20 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:53.272 09:33:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.272 09:33:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.272 09:33:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.272 ************************************ 00:15:53.272 START TEST test_create_ublk 00:15:53.272 ************************************ 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.272 [2024-11-29 09:33:20.835615] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:53.272 [2024-11-29 09:33:20.837309] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.272 [2024-11-29 09:33:20.932782] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:53.272 [2024-11-29 09:33:20.933229] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:53.272 [2024-11-29 09:33:20.933258] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:53.272 [2024-11-29 09:33:20.933266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:53.272 [2024-11-29 09:33:20.943651] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:53.272 [2024-11-29 09:33:20.943682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:53.272 [2024-11-29 09:33:20.951624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:53.272 [2024-11-29 09:33:20.952337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:53.272 [2024-11-29 09:33:20.973122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:53.272 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.272 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.533 09:33:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.533 09:33:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:53.533 { 00:15:53.533 "ublk_device": "/dev/ublkb0", 00:15:53.533 "id": 0, 00:15:53.533 "queue_depth": 512, 00:15:53.533 "num_queues": 4, 00:15:53.533 "bdev_name": "Malloc0" 00:15:53.533 } 00:15:53.533 ]' 00:15:53.533 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:53.533 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:53.533 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:53.534 09:33:21 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:53.534 09:33:21 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:53.832 fio: verification read phase will never start because write phase uses all of runtime 00:15:53.832 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:53.832 fio-3.35 00:15:53.833 Starting 1 process 00:16:03.832 00:16:03.832 fio_test: (groupid=0, jobs=1): err= 0: pid=86650: Fri Nov 29 09:33:31 2024 00:16:03.832 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(573MiB/10001msec); 0 zone resets 00:16:03.832 clat (usec): min=35, max=9837, avg=67.41, stdev=124.44 00:16:03.832 lat (usec): min=36, max=9846, avg=67.82, stdev=124.47 00:16:03.832 clat percentiles (usec): 00:16:03.832 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 55], 20.00th=[ 59], 00:16:03.832 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:16:03.832 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 74], 00:16:03.832 | 99.00th=[ 87], 99.50th=[ 95], 99.90th=[ 2671], 99.95th=[ 3425], 00:16:03.832 | 99.99th=[ 3818] 00:16:03.832 bw ( KiB/s): min=31672, max=65880, per=99.95%, avg=58687.58, stdev=6873.22, samples=19 00:16:03.832 iops : min= 7918, max=16470, avg=14671.89, stdev=1718.31, samples=19 00:16:03.832 lat (usec) : 50=6.78%, 100=92.81%, 250=0.20%, 500=0.02%, 750=0.01% 00:16:03.832 lat (usec) : 1000=0.01% 00:16:03.832 lat (msec) : 2=0.04%, 4=0.12%, 10=0.01% 00:16:03.832 cpu : usr=2.72%, sys=10.97%, ctx=146802, majf=0, minf=796 00:16:03.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.832 issued rwts: total=0,146802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.832 00:16:03.832 Run status group 0 (all jobs): 00:16:03.832 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=573MiB (601MB), run=10001-10001msec 00:16:03.832 00:16:03.832 Disk stats (read/write): 00:16:03.832 ublkb0: ios=0/145236, merge=0/0, ticks=0/8597, in_queue=8598, util=99.02% 00:16:03.832 09:33:31 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.832 [2024-11-29 09:33:31.394854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:03.832 [2024-11-29 09:33:31.438127] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:03.832 [2024-11-29 09:33:31.439058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:03.832 [2024-11-29 09:33:31.444614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:03.832 [2024-11-29 09:33:31.444879] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:03.832 [2024-11-29 09:33:31.444892] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 
stopped 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.832 09:33:31 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.832 [2024-11-29 09:33:31.462698] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:03.832 request: 00:16:03.832 { 00:16:03.832 "ublk_id": 0, 00:16:03.832 "method": "ublk_stop_disk", 00:16:03.832 "req_id": 1 00:16:03.832 } 00:16:03.832 Got JSON-RPC error response 00:16:03.832 response: 00:16:03.832 { 00:16:03.832 "code": -19, 00:16:03.832 "message": "No such device" 00:16:03.832 } 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.832 09:33:31 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.832 [2024-11-29 09:33:31.476663] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:03.832 [2024-11-29 09:33:31.477912] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:03.832 [2024-11-29 09:33:31.477941] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.832 09:33:31 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.832 09:33:31 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:03.832 09:33:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.832 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.832 09:33:31 
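The test_create_ublk pass above exercises the full single-disk lifecycle, including the expected -19 failure when stopping an already-removed device. A minimal standalone sketch of the same sequence, assuming a running spdk_tgt and the stock rpc.py client at the path used throughout this run:

  # Single-disk lifecycle as exercised by test_create_ublk (sizes and IDs from this run).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target                      # bring up the SPDK ublk target
  $rpc bdev_malloc_create 128 4096             # 128 MiB bdev, 4 KiB blocks -> "Malloc0"
  $rpc ublk_start_disk Malloc0 0 -q 4 -d 512   # expose /dev/ublkb0 with 4 queues, qd 512
  $rpc ublk_get_disks -n 0                     # JSON describing the new device
  $rpc ublk_stop_disk 0                        # STOP_DEV + DEL_DEV, as traced above
  $rpc ublk_stop_disk 0 \
      || echo 'second stop fails with -19 (No such device), as the NOT() check expects'
  $rpc ublk_destroy_target
  $rpc bdev_malloc_delete Malloc0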
ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:03.832 09:33:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:04.091 09:33:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:04.091 09:33:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:04.091 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.091 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.091 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.091 09:33:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:04.091 09:33:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:04.091 09:33:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:04.091 00:16:04.091 real 0m10.814s 00:16:04.091 user 0m0.580s 00:16:04.091 sys 0m1.180s 00:16:04.091 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.091 ************************************ 00:16:04.091 09:33:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.091 END TEST test_create_ublk 00:16:04.091 ************************************ 00:16:04.091 09:33:31 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:04.091 09:33:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:04.091 09:33:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.091 09:33:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.091 ************************************ 00:16:04.091 START TEST test_create_multi_ublk 00:16:04.091 ************************************ 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.091 [2024-11-29 09:33:31.699595] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:04.091 [2024-11-29 09:33:31.700463] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:04.091 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:04.092 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.092 09:33:31 
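test_create_multi_ublk, starting above, repeats that bring-up once per device; the loop driving the per-device records below is essentially the following ($rpc as in the sketch above, MAX_DEV_ID=3 per the seq 0 3 trace):

  # One malloc bdev and one ublk device per index 0..3.
  for i in $(seq 0 3); do
      $rpc bdev_malloc_create -b "Malloc$i" 128 4096
      $rpc ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
  done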
ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.092 [2024-11-29 09:33:31.771720] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:04.092 [2024-11-29 09:33:31.772017] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:04.092 [2024-11-29 09:33:31.772024] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:04.092 [2024-11-29 09:33:31.772030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:04.092 [2024-11-29 09:33:31.783639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:04.092 [2024-11-29 09:33:31.783660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:04.092 [2024-11-29 09:33:31.795618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:04.092 [2024-11-29 09:33:31.796098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:04.350 [2024-11-29 09:33:31.821608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:04.350 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.350 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:04.350 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:04.350 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.351 [2024-11-29 09:33:31.905696] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:04.351 [2024-11-29 09:33:31.905992] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:04.351 [2024-11-29 09:33:31.906004] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:04.351 [2024-11-29 09:33:31.906009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:04.351 [2024-11-29 09:33:31.917643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:04.351 [2024-11-29 09:33:31.917659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:04.351 [2024-11-29 09:33:31.929612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:04.351 [2024-11-29 09:33:31.930087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:04.351 [2024-11-29 09:33:31.954615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.351 09:33:31 
ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.351 09:33:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.351 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.351 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:04.351 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:04.351 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.351 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.351 [2024-11-29 09:33:32.037696] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:04.351 [2024-11-29 09:33:32.037995] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:04.351 [2024-11-29 09:33:32.038006] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:04.351 [2024-11-29 09:33:32.038012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:04.351 [2024-11-29 09:33:32.049624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:04.351 [2024-11-29 09:33:32.049643] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:04.351 [2024-11-29 09:33:32.061609] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:04.351 [2024-11-29 09:33:32.062085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:04.610 [2024-11-29 09:33:32.086624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.610 [2024-11-29 09:33:32.168696] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:04.610 [2024-11-29 09:33:32.168986] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:04.610 [2024-11-29 09:33:32.168999] ublk.c: 
971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:04.610 [2024-11-29 09:33:32.169004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:04.610 [2024-11-29 09:33:32.180634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:04.610 [2024-11-29 09:33:32.180650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:04.610 [2024-11-29 09:33:32.192613] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:04.610 [2024-11-29 09:33:32.193087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:04.610 [2024-11-29 09:33:32.205630] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:04.610 { 00:16:04.610 "ublk_device": "/dev/ublkb0", 00:16:04.610 "id": 0, 00:16:04.610 "queue_depth": 512, 00:16:04.610 "num_queues": 4, 00:16:04.610 "bdev_name": "Malloc0" 00:16:04.610 }, 00:16:04.610 { 00:16:04.610 "ublk_device": "/dev/ublkb1", 00:16:04.610 "id": 1, 00:16:04.610 "queue_depth": 512, 00:16:04.610 "num_queues": 4, 00:16:04.610 "bdev_name": "Malloc1" 00:16:04.610 }, 00:16:04.610 { 00:16:04.610 "ublk_device": "/dev/ublkb2", 00:16:04.610 "id": 2, 00:16:04.610 "queue_depth": 512, 00:16:04.610 "num_queues": 4, 00:16:04.610 "bdev_name": "Malloc2" 00:16:04.610 }, 00:16:04.610 { 00:16:04.610 "ublk_device": "/dev/ublkb3", 00:16:04.610 "id": 3, 00:16:04.610 "queue_depth": 512, 00:16:04.610 "num_queues": 4, 00:16:04.610 "bdev_name": "Malloc3" 00:16:04.610 } 00:16:04.610 ]' 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:04.610 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:04.868 09:33:32 
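The per-field jq checks above for device 0 repeat below for devices 1 through 3. An equivalent one-shot validation of the whole ublk_get_disks listing, with expected values taken from this run ($rpc as in the earlier sketch):

  # Assert count, queue settings, and device paths in a single jq pass.
  $rpc ublk_get_disks | jq -e '
      length == 4
      and all(.[]; .queue_depth == 512 and .num_queues == 4)
      and [.[].ublk_device] ==
          ["/dev/ublkb0", "/dev/ublkb1", "/dev/ublkb2", "/dev/ublkb3"]'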
ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:04.868 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:04.869 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:04.869 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:04.869 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:04.869 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:04.869 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:04.869 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:04.869 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:05.127 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:05.128 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 [2024-11-29 09:33:32.890692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:05.387 [2024-11-29 09:33:32.938652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:05.387 [2024-11-29 09:33:32.939500] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:05.387 [2024-11-29 09:33:32.946632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:05.387 [2024-11-29 09:33:32.946874] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:05.387 [2024-11-29 09:33:32.946888] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 09:33:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 [2024-11-29 09:33:32.962681] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:05.387 [2024-11-29 09:33:32.992113] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:05.387 [2024-11-29 09:33:32.993217] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:05.387 [2024-11-29 09:33:33.002620] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:05.387 [2024-11-29 09:33:33.002861] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:05.387 [2024-11-29 09:33:33.002876] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 [2024-11-29 09:33:33.017654] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:05.387 [2024-11-29 09:33:33.062646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:05.387 [2024-11-29 09:33:33.063387] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:05.387 [2024-11-29 09:33:33.070617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:05.387 [2024-11-29 09:33:33.070849] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:05.387 [2024-11-29 09:33:33.070862] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 3 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 [2024-11-29 09:33:33.086673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:05.646 [2024-11-29 09:33:33.126109] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:05.646 [2024-11-29 09:33:33.127050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:05.646 [2024-11-29 09:33:33.132624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:05.646 [2024-11-29 09:33:33.132858] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:05.646 [2024-11-29 09:33:33.132870] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:05.646 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.646 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:05.646 [2024-11-29 09:33:33.332665] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:05.646 [2024-11-29 09:33:33.333921] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:05.646 [2024-11-29 09:33:33.333947] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:05.646 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:05.646 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.646 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:05.646 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.646 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:05.905 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:06.164 ************************************ 00:16:06.164 END TEST test_create_multi_ublk 00:16:06.164 ************************************ 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:06.164 00:16:06.164 real 0m1.997s 00:16:06.164 user 0m0.827s 00:16:06.164 sys 0m0.148s 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.164 09:33:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.164 09:33:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:06.164 09:33:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:06.164 09:33:33 ublk -- ublk/ublk.sh@130 -- # killprocess 86611 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@954 -- # '[' -z 86611 ']' 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@958 -- # kill -0 86611 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@959 -- # uname 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86611 00:16:06.164 killing process with pid 86611 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86611' 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@973 -- # kill 86611 00:16:06.164 09:33:33 ublk -- common/autotest_common.sh@978 -- # wait 86611 00:16:06.422 [2024-11-29 09:33:33.891822] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:06.422 [2024-11-29 09:33:33.891879] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:06.682 00:16:06.682 real 0m18.383s 00:16:06.682 user 0m28.470s 00:16:06.682 sys 0m7.417s 00:16:06.682 09:33:34 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.682 ************************************ 00:16:06.682 END TEST ublk 00:16:06.682 ************************************ 00:16:06.682 09:33:34 ublk -- 
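killprocess, traced above while stopping pid 86611, follows the shape below; this is a simplified reconstruction from the trace, not the full autotest_common.sh helper (the uname and sudo branches are elided):

  # Stop a test daemon by pid, refusing to signal a bare sudo wrapper.
  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1            # must still be running
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      [ "$process_name" = sudo ] && return 1            # sudo handling elided here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap and propagate exit status
  }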
common/autotest_common.sh@10 -- # set +x 00:16:06.682 09:33:34 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:06.682 09:33:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:06.682 09:33:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.682 09:33:34 -- common/autotest_common.sh@10 -- # set +x 00:16:06.682 ************************************ 00:16:06.682 START TEST ublk_recovery 00:16:06.682 ************************************ 00:16:06.682 09:33:34 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:06.682 * Looking for test storage... 00:16:06.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:06.682 09:33:34 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:06.682 09:33:34 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:06.682 09:33:34 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:06.682 09:33:34 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:06.682 09:33:34 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.682 09:33:34 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.682 09:33:34 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.682 09:33:34 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.682 09:33:34 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.683 09:33:34 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:06.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.683 --rc genhtml_branch_coverage=1 00:16:06.683 --rc genhtml_function_coverage=1 00:16:06.683 --rc genhtml_legend=1 00:16:06.683 --rc geninfo_all_blocks=1 00:16:06.683 --rc geninfo_unexecuted_blocks=1 00:16:06.683 00:16:06.683 ' 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:06.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.683 --rc genhtml_branch_coverage=1 00:16:06.683 --rc genhtml_function_coverage=1 00:16:06.683 --rc genhtml_legend=1 00:16:06.683 --rc geninfo_all_blocks=1 00:16:06.683 --rc geninfo_unexecuted_blocks=1 00:16:06.683 00:16:06.683 ' 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:06.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.683 --rc genhtml_branch_coverage=1 00:16:06.683 --rc genhtml_function_coverage=1 00:16:06.683 --rc genhtml_legend=1 00:16:06.683 --rc geninfo_all_blocks=1 00:16:06.683 --rc geninfo_unexecuted_blocks=1 00:16:06.683 00:16:06.683 ' 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:06.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.683 --rc genhtml_branch_coverage=1 00:16:06.683 --rc genhtml_function_coverage=1 00:16:06.683 --rc genhtml_legend=1 00:16:06.683 --rc geninfo_all_blocks=1 00:16:06.683 --rc geninfo_unexecuted_blocks=1 00:16:06.683 00:16:06.683 ' 00:16:06.683 09:33:34 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:06.683 09:33:34 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:06.683 09:33:34 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:06.683 09:33:34 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86973 00:16:06.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.683 09:33:34 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.683 09:33:34 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86973 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86973 ']' 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.683 09:33:34 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.683 09:33:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.943 [2024-11-29 09:33:34.435922] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:06.943 [2024-11-29 09:33:34.436791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86973 ] 00:16:06.943 [2024-11-29 09:33:34.574526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
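ublk_recovery.sh reaches this point with the same bring-up pattern; the sketch below is condensed from the trace above and the RPC calls that follow, with SPDK_BIN_DIR and helper names as used in this run:

  # Target bring-up for the recovery test: one 64 MiB bdev behind /dev/ublkb1.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  modprobe ublk_drv
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # reactors on cores 0-1, ublk debug tracing
  spdk_pid=$!
  waitforlisten "$spdk_pid"                    # autotest helper: wait for the RPC socket
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096
  $rpc ublk_start_disk malloc0 1 -q 2 -d 128   # 2 queues, queue depth 128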
00:16:06.943 [2024-11-29 09:33:34.598824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.943 [2024-11-29 09:33:34.622968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.943 [2024-11-29 09:33:34.623042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.511 09:33:35 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.511 09:33:35 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:07.511 09:33:35 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:07.511 09:33:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.511 09:33:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.511 [2024-11-29 09:33:35.233606] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:07.511 [2024-11-29 09:33:35.234512] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:07.769 09:33:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.769 09:33:35 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:07.769 09:33:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.769 09:33:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.769 malloc0 00:16:07.769 09:33:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.769 09:33:35 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:07.769 09:33:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.769 09:33:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.769 [2024-11-29 09:33:35.265727] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:07.769 [2024-11-29 09:33:35.265811] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:07.769 [2024-11-29 09:33:35.265825] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:07.769 [2024-11-29 09:33:35.265830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.769 [2024-11-29 09:33:35.274694] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.769 [2024-11-29 09:33:35.274710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.769 [2024-11-29 09:33:35.281613] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.769 [2024-11-29 09:33:35.281721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:07.769 [2024-11-29 09:33:35.296617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.769 1 00:16:07.769 09:33:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.769 09:33:35 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:08.704 09:33:36 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87006 00:16:08.704 09:33:36 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:08.704 09:33:36 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:08.704 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.704 
fio-3.35 00:16:08.704 Starting 1 process 00:16:13.977 09:33:41 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86973 00:16:13.977 09:33:41 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:19.263 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86973 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:19.263 09:33:46 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:19.263 09:33:46 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87119 00:16:19.263 09:33:46 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:19.263 09:33:46 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87119 00:16:19.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.263 09:33:46 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 87119 ']' 00:16:19.263 09:33:46 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.263 09:33:46 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.263 09:33:46 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.263 09:33:46 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.263 09:33:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.263 [2024-11-29 09:33:46.387287] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:19.263 [2024-11-29 09:33:46.387395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87119 ] 00:16:19.263 [2024-11-29 09:33:46.519016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
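With fio running against /dev/ublkb1, the script kill -9s the target (pid 86973 above) and starts a fresh one (pid 87119). Because the kernel side of the ublk device survives the daemon's death, queued I/O stalls rather than failing, and the new process only has to re-attach. A condensed sketch of that path, matching the recovery RPCs traced below ($rpc and $spdk_pid as in the previous sketch; $fio_pid stands for the background fio job started earlier):

  kill -9 "$spdk_pid"                          # hard-kill mid-I/O; /dev/ublkb1 stays present
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
  spdk_pid=$!
  waitforlisten "$spdk_pid"
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev under the same name
  $rpc ublk_recover_disk malloc0 1             # GET_DEV_INFO -> START/END_USER_RECOVERY
  wait "$fio_pid"                              # fio now runs to completion (60 s randrw)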
00:16:19.263 [2024-11-29 09:33:46.539392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.263 [2024-11-29 09:33:46.556576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.263 [2024-11-29 09:33:46.556633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:19.521 09:33:47 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.521 [2024-11-29 09:33:47.186608] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:19.521 [2024-11-29 09:33:47.187561] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.521 09:33:47 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.521 malloc0 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.521 09:33:47 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:19.521 09:33:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.522 09:33:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.522 [2024-11-29 09:33:47.218728] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:19.522 [2024-11-29 09:33:47.218763] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:19.522 [2024-11-29 09:33:47.218772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:19.522 [2024-11-29 09:33:47.226635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:19.522 [2024-11-29 09:33:47.226662] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:19.522 [2024-11-29 09:33:47.226671] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:19.522 1 00:16:19.522 [2024-11-29 09:33:47.226752] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:19.522 09:33:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.522 09:33:47 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87006 00:16:19.522 [2024-11-29 09:33:47.234611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:19.522 [2024-11-29 09:33:47.240931] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:19.779 [2024-11-29 09:33:47.248805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:19.779 [2024-11-29 09:33:47.248825] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:15.995 00:17:15.995 fio_test: (groupid=0, jobs=1): err= 0: pid=87009: Fri Nov 29 09:34:36 2024 00:17:15.995 read: IOPS=27.5k, BW=107MiB/s (112MB/s)(6437MiB/60003msec) 00:17:15.995 slat (nsec): min=935, max=1263.5k, 
avg=4923.64, stdev=1674.80 00:17:15.995 clat (usec): min=860, max=5947.4k, avg=2320.40, stdev=39569.46 00:17:15.995 lat (usec): min=870, max=5947.4k, avg=2325.32, stdev=39569.46 00:17:15.995 clat percentiles (usec): 00:17:15.995 | 1.00th=[ 1713], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1893], 00:17:15.995 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1942], 00:17:15.995 | 70.00th=[ 1958], 80.00th=[ 1991], 90.00th=[ 2073], 95.00th=[ 2966], 00:17:15.995 | 99.00th=[ 5080], 99.50th=[ 5473], 99.90th=[ 7177], 99.95th=[ 8225], 00:17:15.995 | 99.99th=[13042] 00:17:15.995 bw ( KiB/s): min=33344, max=132976, per=100.00%, avg=120981.19, stdev=13672.51, samples=108 00:17:15.995 iops : min= 8336, max=33244, avg=30245.30, stdev=3418.13, samples=108 00:17:15.995 write: IOPS=27.4k, BW=107MiB/s (112MB/s)(6431MiB/60003msec); 0 zone resets 00:17:15.995 slat (nsec): min=938, max=227437, avg=4967.12, stdev=1377.68 00:17:15.995 clat (usec): min=915, max=5947.5k, avg=2331.86, stdev=34362.94 00:17:15.995 lat (usec): min=925, max=5947.5k, avg=2336.83, stdev=34362.93 00:17:15.995 clat percentiles (usec): 00:17:15.995 | 1.00th=[ 1762], 5.00th=[ 1909], 10.00th=[ 1942], 20.00th=[ 1975], 00:17:15.995 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040], 00:17:15.995 | 70.00th=[ 2057], 80.00th=[ 2073], 90.00th=[ 2180], 95.00th=[ 2835], 00:17:15.995 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 7111], 99.95th=[ 8094], 00:17:15.995 | 99.99th=[13173] 00:17:15.995 bw ( KiB/s): min=32928, max=133504, per=100.00%, avg=120881.26, stdev=13663.89, samples=108 00:17:15.995 iops : min= 8232, max=33376, avg=30220.31, stdev=3415.97, samples=108 00:17:15.995 lat (usec) : 1000=0.01% 00:17:15.995 lat (msec) : 2=58.36%, 4=39.20%, 10=2.40%, 20=0.03%, >=2000=0.01% 00:17:15.995 cpu : usr=6.20%, sys=27.85%, ctx=109464, majf=0, minf=13 00:17:15.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:15.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:15.995 issued rwts: total=1647760,1646278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:15.995 00:17:15.995 Run status group 0 (all jobs): 00:17:15.995 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6437MiB (6749MB), run=60003-60003msec 00:17:15.995 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6431MiB (6743MB), run=60003-60003msec 00:17:15.995 00:17:15.995 Disk stats (read/write): 00:17:15.995 ublkb1: ios=1644411/1642864, merge=0/0, ticks=3731867/3612911, in_queue=7344779, util=99.89% 00:17:15.995 09:34:36 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.995 [2024-11-29 09:34:36.558934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:15.995 [2024-11-29 09:34:36.586719] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:15.995 [2024-11-29 09:34:36.586858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:15.995 [2024-11-29 09:34:36.594626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:15.995 [2024-11-29 09:34:36.594762] ublk.c: 
985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:15.995 [2024-11-29 09:34:36.594820] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.995 09:34:36 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.995 [2024-11-29 09:34:36.610652] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:15.995 [2024-11-29 09:34:36.611947] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:15.995 [2024-11-29 09:34:36.611973] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.995 09:34:36 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:15.995 09:34:36 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:15.995 09:34:36 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87119 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 87119 ']' 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 87119 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87119 00:17:15.995 killing process with pid 87119 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87119' 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@973 -- # kill 87119 00:17:15.995 09:34:36 ublk_recovery -- common/autotest_common.sh@978 -- # wait 87119 00:17:15.995 [2024-11-29 09:34:36.810967] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:15.995 [2024-11-29 09:34:36.811020] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:15.995 ************************************ 00:17:15.995 END TEST ublk_recovery 00:17:15.995 ************************************ 00:17:15.995 00:17:15.995 real 1m2.889s 00:17:15.995 user 1m40.141s 00:17:15.995 sys 0m35.400s 00:17:15.995 09:34:37 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.995 09:34:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.995 09:34:37 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:15.995 09:34:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:15.995 09:34:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.995 09:34:37 -- common/autotest_common.sh@10 -- # set +x 00:17:15.995 09:34:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- 
spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:15.995 09:34:37 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:15.995 09:34:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.995 09:34:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.995 09:34:37 -- common/autotest_common.sh@10 -- # set +x 00:17:15.995 ************************************ 00:17:15.995 START TEST ftl 00:17:15.995 ************************************ 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:15.996 * Looking for test storage... 00:17:15.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:15.996 09:34:37 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:15.996 09:34:37 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:15.996 09:34:37 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:15.996 09:34:37 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:15.996 09:34:37 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:15.996 09:34:37 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:15.996 09:34:37 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:15.996 09:34:37 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:15.996 09:34:37 ftl -- scripts/common.sh@345 -- # : 1 00:17:15.996 09:34:37 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:15.996 09:34:37 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:15.996 09:34:37 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:15.996 09:34:37 ftl -- scripts/common.sh@353 -- # local d=1 00:17:15.996 09:34:37 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:15.996 09:34:37 ftl -- scripts/common.sh@355 -- # echo 1 00:17:15.996 09:34:37 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:15.996 09:34:37 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@353 -- # local d=2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:15.996 09:34:37 ftl -- scripts/common.sh@355 -- # echo 2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:15.996 09:34:37 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:15.996 09:34:37 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:15.996 09:34:37 ftl -- scripts/common.sh@368 -- # return 0 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.996 --rc genhtml_branch_coverage=1 00:17:15.996 --rc genhtml_function_coverage=1 00:17:15.996 --rc genhtml_legend=1 00:17:15.996 --rc geninfo_all_blocks=1 00:17:15.996 --rc geninfo_unexecuted_blocks=1 00:17:15.996 00:17:15.996 ' 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.996 --rc genhtml_branch_coverage=1 00:17:15.996 --rc genhtml_function_coverage=1 00:17:15.996 --rc genhtml_legend=1 00:17:15.996 --rc geninfo_all_blocks=1 00:17:15.996 --rc geninfo_unexecuted_blocks=1 00:17:15.996 00:17:15.996 ' 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.996 --rc genhtml_branch_coverage=1 00:17:15.996 --rc genhtml_function_coverage=1 00:17:15.996 --rc genhtml_legend=1 00:17:15.996 --rc geninfo_all_blocks=1 00:17:15.996 --rc geninfo_unexecuted_blocks=1 00:17:15.996 00:17:15.996 ' 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:15.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.996 --rc genhtml_branch_coverage=1 00:17:15.996 --rc genhtml_function_coverage=1 00:17:15.996 --rc genhtml_legend=1 00:17:15.996 --rc geninfo_all_blocks=1 00:17:15.996 --rc geninfo_unexecuted_blocks=1 00:17:15.996 00:17:15.996 ' 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:15.996 09:34:37 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:15.996 09:34:37 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.996 09:34:37 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.996 09:34:37 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
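An illustrative aside on the cmp_versions trace a few entries above: scripts/common.sh decides that the installed lcov (1.15 here) is older than 2 by splitting both version strings into fields and comparing them numerically, left to right. A minimal standalone sketch of the same idea, with a hypothetical function name and simplified to plain dot-separated versions:

version_lt() {                          # exit 0 if version $1 < version $2, 1 otherwise
  local -a a b
  local i n x y
  IFS=. read -ra a <<< "$1"
  IFS=. read -ra b <<< "$2"
  n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    x=${a[i]:-0}; y=${b[i]:-0}
    (( x < y )) && return 0             # first differing field decides
    (( x > y )) && return 1
  done
  return 1                              # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"    # mirrors the 'lt 1.15 2' check in the trace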
00:17:15.996 09:34:37 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:15.996 09:34:37 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.996 09:34:37 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:15.996 09:34:37 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:15.996 09:34:37 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.996 09:34:37 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.996 09:34:37 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:15.996 09:34:37 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:15.996 09:34:37 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.996 09:34:37 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.996 09:34:37 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:15.996 09:34:37 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:15.996 09:34:37 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.996 09:34:37 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.996 09:34:37 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:15.996 09:34:37 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:15.996 09:34:37 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.996 09:34:37 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.996 09:34:37 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.996 09:34:37 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.996 09:34:37 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:15.996 09:34:37 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:15.996 09:34:37 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.996 09:34:37 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:15.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:15.996 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.996 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.996 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.996 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87915 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87915 00:17:15.996 09:34:37 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:15.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@835 -- # '[' -z 87915 ']' 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.996 09:34:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:15.996 [2024-11-29 09:34:37.875252] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:15.996 [2024-11-29 09:34:37.875610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87915 ] 00:17:15.996 [2024-11-29 09:34:38.012714] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:15.996 [2024-11-29 09:34:38.037747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.996 [2024-11-29 09:34:38.060427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.996 09:34:38 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.996 09:34:38 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:15.996 09:34:38 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:15.996 09:34:38 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@50 -- # break 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:15.996 09:34:39 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:15.996 09:34:40 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:15.996 09:34:40 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:15.996 09:34:40 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:15.996 09:34:40 ftl -- ftl/ftl.sh@63 -- # break 00:17:15.996 09:34:40 ftl -- ftl/ftl.sh@66 -- # killprocess 87915 00:17:15.996 09:34:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 87915 ']' 00:17:15.996 09:34:40 ftl -- common/autotest_common.sh@958 -- # kill -0 87915 
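An illustrative aside on the device-probe sequence traced just above (the spdk_tgt probe that ftl.sh runs before the killprocess call that continues below): the script starts a throw-away spdk_tgt, loads the NVMe controllers into it, then filters bdev_get_bdevs output with jq to pick a write-buffer cache device (64-byte metadata, not zoned, at least 1310720 blocks, i.e. 5 GiB at 4 KiB -> 0000:00:10.0) and a base device (any other large non-zoned namespace -> 0000:00:11.0). Condensed into a standalone sketch using only commands that appear in the trace; the wait for the RPC socket, error handling, and cleanup are simplified:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
spdk=/home/vagrant/spdk_repo/spdk

$spdk/build/bin/spdk_tgt --wait-for-rpc &    # probe-only target
tgt_pid=$!
# ... wait for the RPC socket to come up (waitforlisten in the trace) ...

$rpc bdev_set_options -d                     # same option as passed in the trace
$rpc framework_start_init
$rpc load_subsystem_config -j <($spdk/scripts/gen_nvme.sh)   # the /dev/fd/62 seen above

# cache candidates: 64B metadata, not zoned, >= 1310720 blocks
$rpc bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

# base candidates: other large non-zoned namespaces, excluding the chosen cache device
$rpc bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

kill "$tgt_pid"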
00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@959 -- # uname 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87915 00:17:15.997 killing process with pid 87915 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87915' 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@973 -- # kill 87915 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@978 -- # wait 87915 00:17:15.997 09:34:40 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:15.997 09:34:40 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.997 09:34:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:15.997 ************************************ 00:17:15.997 START TEST ftl_fio_basic 00:17:15.997 ************************************ 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:15.997 * Looking for test storage... 00:17:15.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.997 --rc genhtml_branch_coverage=1 00:17:15.997 --rc genhtml_function_coverage=1 00:17:15.997 --rc genhtml_legend=1 00:17:15.997 --rc geninfo_all_blocks=1 00:17:15.997 --rc geninfo_unexecuted_blocks=1 00:17:15.997 00:17:15.997 ' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.997 --rc genhtml_branch_coverage=1 00:17:15.997 --rc genhtml_function_coverage=1 00:17:15.997 --rc genhtml_legend=1 00:17:15.997 --rc geninfo_all_blocks=1 00:17:15.997 --rc geninfo_unexecuted_blocks=1 00:17:15.997 00:17:15.997 ' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.997 --rc genhtml_branch_coverage=1 00:17:15.997 --rc genhtml_function_coverage=1 00:17:15.997 --rc genhtml_legend=1 00:17:15.997 --rc geninfo_all_blocks=1 00:17:15.997 --rc geninfo_unexecuted_blocks=1 00:17:15.997 00:17:15.997 ' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.997 --rc genhtml_branch_coverage=1 00:17:15.997 --rc genhtml_function_coverage=1 00:17:15.997 --rc genhtml_legend=1 00:17:15.997 --rc geninfo_all_blocks=1 00:17:15.997 --rc geninfo_unexecuted_blocks=1 00:17:15.997 00:17:15.997 ' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88036 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88036 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 88036 ']' 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.997 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.998 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.998 09:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:15.998 [2024-11-29 09:34:40.609867] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:15.998 [2024-11-29 09:34:40.610273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88036 ] 00:17:15.998 [2024-11-29 09:34:40.748876] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
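An illustrative aside on the two variables exported a few entries above (FTL_BDEV_NAME=ftl0 and FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json): they are meant to be picked up by fio job files through fio's ${VAR} environment expansion when fio runs with SPDK's bdev plugin. The repo ships its own job definitions for the randw-verify* tests listed above; the snippet below is only a hypothetical sketch of how such a job would consume the two variables, written as a quoted heredoc so the ${...} references are left for fio, not the shell, to expand:

# hypothetical job file, not copied from the repo
cat > /tmp/ftl-basic-sketch.fio <<'EOF'
[global]
ioengine=spdk_bdev            # SPDK fio bdev plugin
spdk_json_conf=${FTL_JSON_CONF}
filename=${FTL_BDEV_NAME}
thread=1

[randw-verify]
rw=randwrite
bs=4k
verify=crc32c
EOF
# run under fio with the SPDK bdev plugin preloaded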
00:17:15.998 [2024-11-29 09:34:40.775433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:15.998 [2024-11-29 09:34:40.801513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.998 [2024-11-29 09:34:40.801706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.998 [2024-11-29 09:34:40.801606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:15.998 { 00:17:15.998 "name": "nvme0n1", 00:17:15.998 "aliases": [ 00:17:15.998 "f8211aae-d454-4aad-947c-21e5eba62653" 00:17:15.998 ], 00:17:15.998 "product_name": "NVMe disk", 00:17:15.998 "block_size": 4096, 00:17:15.998 "num_blocks": 1310720, 00:17:15.998 "uuid": "f8211aae-d454-4aad-947c-21e5eba62653", 00:17:15.998 "numa_id": -1, 00:17:15.998 "assigned_rate_limits": { 00:17:15.998 "rw_ios_per_sec": 0, 00:17:15.998 "rw_mbytes_per_sec": 0, 00:17:15.998 "r_mbytes_per_sec": 0, 00:17:15.998 "w_mbytes_per_sec": 0 00:17:15.998 }, 00:17:15.998 "claimed": false, 00:17:15.998 "zoned": false, 00:17:15.998 "supported_io_types": { 00:17:15.998 "read": true, 00:17:15.998 "write": true, 00:17:15.998 "unmap": true, 00:17:15.998 "flush": true, 00:17:15.998 "reset": true, 00:17:15.998 "nvme_admin": true, 00:17:15.998 "nvme_io": true, 00:17:15.998 "nvme_io_md": false, 00:17:15.998 "write_zeroes": true, 00:17:15.998 "zcopy": false, 00:17:15.998 "get_zone_info": false, 00:17:15.998 "zone_management": false, 00:17:15.998 "zone_append": false, 00:17:15.998 "compare": true, 00:17:15.998 "compare_and_write": false, 00:17:15.998 "abort": true, 00:17:15.998 "seek_hole": false, 00:17:15.998 "seek_data": false, 00:17:15.998 "copy": true, 00:17:15.998 "nvme_iov_md": false 00:17:15.998 }, 00:17:15.998 "driver_specific": { 00:17:15.998 "nvme": [ 00:17:15.998 { 00:17:15.998 "pci_address": "0000:00:11.0", 00:17:15.998 "trid": { 00:17:15.998 "trtype": "PCIe", 00:17:15.998 
"traddr": "0000:00:11.0" 00:17:15.998 }, 00:17:15.998 "ctrlr_data": { 00:17:15.998 "cntlid": 0, 00:17:15.998 "vendor_id": "0x1b36", 00:17:15.998 "model_number": "QEMU NVMe Ctrl", 00:17:15.998 "serial_number": "12341", 00:17:15.998 "firmware_revision": "8.0.0", 00:17:15.998 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:15.998 "oacs": { 00:17:15.998 "security": 0, 00:17:15.998 "format": 1, 00:17:15.998 "firmware": 0, 00:17:15.998 "ns_manage": 1 00:17:15.998 }, 00:17:15.998 "multi_ctrlr": false, 00:17:15.998 "ana_reporting": false 00:17:15.998 }, 00:17:15.998 "vs": { 00:17:15.998 "nvme_version": "1.4" 00:17:15.998 }, 00:17:15.998 "ns_data": { 00:17:15.998 "id": 1, 00:17:15.998 "can_share": false 00:17:15.998 } 00:17:15.998 } 00:17:15.998 ], 00:17:15.998 "mp_policy": "active_passive" 00:17:15.998 } 00:17:15.998 } 00:17:15.998 ]' 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:15.998 09:34:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3d0f6768-1439-4fd2-813d-8987bd8c2031 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3d0f6768-1439-4fd2-813d-8987bd8c2031 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:15.998 { 00:17:15.998 "name": "a0cb1636-1685-4bf4-8214-6f023f15fd20", 00:17:15.998 "aliases": [ 00:17:15.998 "lvs/nvme0n1p0" 00:17:15.998 ], 00:17:15.998 "product_name": "Logical Volume", 00:17:15.998 "block_size": 4096, 00:17:15.998 "num_blocks": 26476544, 00:17:15.998 "uuid": "a0cb1636-1685-4bf4-8214-6f023f15fd20", 00:17:15.998 "assigned_rate_limits": { 00:17:15.998 "rw_ios_per_sec": 0, 00:17:15.998 "rw_mbytes_per_sec": 0, 00:17:15.998 "r_mbytes_per_sec": 0, 00:17:15.998 "w_mbytes_per_sec": 0 00:17:15.998 }, 00:17:15.998 "claimed": false, 00:17:15.998 "zoned": false, 00:17:15.998 "supported_io_types": { 00:17:15.998 "read": true, 00:17:15.998 "write": true, 00:17:15.998 "unmap": true, 00:17:15.998 "flush": false, 00:17:15.998 "reset": true, 00:17:15.998 "nvme_admin": false, 00:17:15.998 "nvme_io": false, 00:17:15.998 "nvme_io_md": false, 00:17:15.998 "write_zeroes": true, 00:17:15.998 "zcopy": false, 00:17:15.998 "get_zone_info": false, 00:17:15.998 "zone_management": false, 00:17:15.998 "zone_append": false, 00:17:15.998 "compare": false, 00:17:15.998 "compare_and_write": false, 00:17:15.998 "abort": false, 00:17:15.998 "seek_hole": true, 00:17:15.998 "seek_data": true, 00:17:15.998 "copy": false, 00:17:15.998 "nvme_iov_md": false 00:17:15.998 }, 00:17:15.998 "driver_specific": { 00:17:15.998 "lvol": { 00:17:15.998 "lvol_store_uuid": "3d0f6768-1439-4fd2-813d-8987bd8c2031", 00:17:15.998 "base_bdev": "nvme0n1", 00:17:15.998 "thin_provision": true, 00:17:15.998 "num_allocated_clusters": 0, 00:17:15.998 "snapshot": false, 00:17:15.998 "clone": false, 00:17:15.998 "esnap_clone": false 00:17:15.998 } 00:17:15.998 } 00:17:15.998 } 00:17:15.998 ]' 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:15.998 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:15.999 09:34:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:15.999 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:15.999 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:15.999 09:34:42 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:15.999 { 00:17:15.999 "name": "a0cb1636-1685-4bf4-8214-6f023f15fd20", 00:17:15.999 "aliases": [ 00:17:15.999 "lvs/nvme0n1p0" 00:17:15.999 ], 00:17:15.999 "product_name": "Logical Volume", 00:17:15.999 "block_size": 4096, 00:17:15.999 "num_blocks": 26476544, 00:17:15.999 "uuid": "a0cb1636-1685-4bf4-8214-6f023f15fd20", 00:17:15.999 "assigned_rate_limits": { 00:17:15.999 "rw_ios_per_sec": 0, 00:17:15.999 "rw_mbytes_per_sec": 0, 00:17:15.999 "r_mbytes_per_sec": 0, 00:17:15.999 "w_mbytes_per_sec": 0 00:17:15.999 }, 00:17:15.999 "claimed": false, 00:17:15.999 "zoned": false, 00:17:15.999 "supported_io_types": { 00:17:15.999 "read": true, 00:17:15.999 "write": true, 00:17:15.999 "unmap": true, 00:17:15.999 "flush": false, 00:17:15.999 "reset": true, 00:17:15.999 "nvme_admin": false, 00:17:15.999 "nvme_io": false, 00:17:15.999 "nvme_io_md": false, 00:17:15.999 "write_zeroes": true, 00:17:15.999 "zcopy": false, 00:17:15.999 "get_zone_info": false, 00:17:15.999 "zone_management": false, 00:17:15.999 "zone_append": false, 00:17:15.999 "compare": false, 00:17:15.999 "compare_and_write": false, 00:17:15.999 "abort": false, 00:17:15.999 "seek_hole": true, 00:17:15.999 "seek_data": true, 00:17:15.999 "copy": false, 00:17:15.999 "nvme_iov_md": false 00:17:15.999 }, 00:17:15.999 "driver_specific": { 00:17:15.999 "lvol": { 00:17:15.999 "lvol_store_uuid": "3d0f6768-1439-4fd2-813d-8987bd8c2031", 00:17:15.999 "base_bdev": "nvme0n1", 00:17:15.999 "thin_provision": true, 00:17:15.999 "num_allocated_clusters": 0, 00:17:15.999 "snapshot": false, 00:17:15.999 "clone": false, 00:17:15.999 "esnap_clone": false 00:17:15.999 } 00:17:15.999 } 00:17:15.999 } 00:17:15.999 ]' 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:15.999 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:15.999 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:15.999 09:34:43 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a0cb1636-1685-4bf4-8214-6f023f15fd20 00:17:16.259 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:16.259 { 00:17:16.259 "name": "a0cb1636-1685-4bf4-8214-6f023f15fd20", 00:17:16.259 "aliases": [ 00:17:16.259 "lvs/nvme0n1p0" 00:17:16.259 ], 00:17:16.259 "product_name": "Logical Volume", 00:17:16.259 "block_size": 4096, 00:17:16.259 "num_blocks": 26476544, 00:17:16.259 "uuid": "a0cb1636-1685-4bf4-8214-6f023f15fd20", 00:17:16.259 "assigned_rate_limits": { 00:17:16.259 "rw_ios_per_sec": 0, 00:17:16.259 "rw_mbytes_per_sec": 0, 00:17:16.259 "r_mbytes_per_sec": 0, 00:17:16.259 "w_mbytes_per_sec": 0 00:17:16.259 }, 00:17:16.259 "claimed": false, 00:17:16.259 "zoned": false, 00:17:16.259 "supported_io_types": { 00:17:16.259 "read": true, 00:17:16.259 "write": true, 00:17:16.259 "unmap": true, 00:17:16.259 "flush": false, 00:17:16.259 "reset": true, 00:17:16.259 "nvme_admin": false, 00:17:16.259 "nvme_io": false, 00:17:16.259 "nvme_io_md": false, 00:17:16.259 "write_zeroes": true, 00:17:16.259 "zcopy": false, 00:17:16.259 "get_zone_info": false, 00:17:16.259 "zone_management": false, 00:17:16.259 "zone_append": false, 00:17:16.259 "compare": false, 00:17:16.259 "compare_and_write": false, 00:17:16.259 "abort": false, 00:17:16.259 "seek_hole": true, 00:17:16.260 "seek_data": true, 00:17:16.260 "copy": false, 00:17:16.260 "nvme_iov_md": false 00:17:16.260 }, 00:17:16.260 "driver_specific": { 00:17:16.260 "lvol": { 00:17:16.260 "lvol_store_uuid": "3d0f6768-1439-4fd2-813d-8987bd8c2031", 00:17:16.260 "base_bdev": "nvme0n1", 00:17:16.260 "thin_provision": true, 00:17:16.260 "num_allocated_clusters": 0, 00:17:16.260 "snapshot": false, 00:17:16.260 "clone": false, 00:17:16.260 "esnap_clone": false 00:17:16.260 } 00:17:16.260 } 00:17:16.260 } 00:17:16.260 ]' 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:16.260 09:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a0cb1636-1685-4bf4-8214-6f023f15fd20 -c nvc0n1p0 --l2p_dram_limit 60 00:17:16.519 [2024-11-29 09:34:44.021926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.021962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:16.519 [2024-11-29 09:34:44.021982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:16.519 [2024-11-29 09:34:44.021989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.022056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.022064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.519 [2024-11-29 09:34:44.022073] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:16.519 [2024-11-29 09:34:44.022079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.022117] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:16.519 [2024-11-29 09:34:44.022329] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:16.519 [2024-11-29 09:34:44.022343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.022348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.519 [2024-11-29 09:34:44.022356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:17:16.519 [2024-11-29 09:34:44.022361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.022395] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 700c8d61-4e0d-4599-a146-cab694a5ba02 00:17:16.519 [2024-11-29 09:34:44.023474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.023496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:16.519 [2024-11-29 09:34:44.023503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:16.519 [2024-11-29 09:34:44.023522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.028661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.028691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.519 [2024-11-29 09:34:44.028699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.038 ms 00:17:16.519 [2024-11-29 09:34:44.028709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.028779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.028787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.519 [2024-11-29 09:34:44.028793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:16.519 [2024-11-29 09:34:44.028800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.028844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.028864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:16.519 [2024-11-29 09:34:44.028870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:16.519 [2024-11-29 09:34:44.028879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.028915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:16.519 [2024-11-29 09:34:44.030217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.030244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.519 [2024-11-29 09:34:44.030254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:17:16.519 [2024-11-29 09:34:44.030271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.030318] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.030333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:16.519 [2024-11-29 09:34:44.030343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:16.519 [2024-11-29 09:34:44.030351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.030377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:16.519 [2024-11-29 09:34:44.030499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:16.519 [2024-11-29 09:34:44.030511] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:16.519 [2024-11-29 09:34:44.030522] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:16.519 [2024-11-29 09:34:44.030532] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:16.519 [2024-11-29 09:34:44.030539] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:16.519 [2024-11-29 09:34:44.030547] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:16.519 [2024-11-29 09:34:44.030553] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:16.519 [2024-11-29 09:34:44.030560] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:16.519 [2024-11-29 09:34:44.030565] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:16.519 [2024-11-29 09:34:44.030572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.030578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:16.519 [2024-11-29 09:34:44.030601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:16.519 [2024-11-29 09:34:44.030607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.030695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.519 [2024-11-29 09:34:44.030703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:16.519 [2024-11-29 09:34:44.030710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:16.519 [2024-11-29 09:34:44.030715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.519 [2024-11-29 09:34:44.030809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:16.519 [2024-11-29 09:34:44.030816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:16.519 [2024-11-29 09:34:44.030823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.520 [2024-11-29 09:34:44.030829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:16.520 [2024-11-29 09:34:44.030841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:16.520 [2024-11-29 09:34:44.030852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:16.520 
[2024-11-29 09:34:44.030859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.520 [2024-11-29 09:34:44.030870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:16.520 [2024-11-29 09:34:44.030876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:16.520 [2024-11-29 09:34:44.030883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.520 [2024-11-29 09:34:44.030891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:16.520 [2024-11-29 09:34:44.030897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:16.520 [2024-11-29 09:34:44.030902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:16.520 [2024-11-29 09:34:44.030925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:16.520 [2024-11-29 09:34:44.030932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:16.520 [2024-11-29 09:34:44.030943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.520 [2024-11-29 09:34:44.030955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:16.520 [2024-11-29 09:34:44.030960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.520 [2024-11-29 09:34:44.030971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:16.520 [2024-11-29 09:34:44.030977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:16.520 [2024-11-29 09:34:44.030982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.520 [2024-11-29 09:34:44.030989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:16.520 [2024-11-29 09:34:44.030994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:16.520 [2024-11-29 09:34:44.031000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.520 [2024-11-29 09:34:44.031004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:16.520 [2024-11-29 09:34:44.031011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:16.520 [2024-11-29 09:34:44.031016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.520 [2024-11-29 09:34:44.031022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:16.520 [2024-11-29 09:34:44.031027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:16.520 [2024-11-29 09:34:44.031033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.520 [2024-11-29 09:34:44.031037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:16.520 [2024-11-29 09:34:44.031043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:16.520 [2024-11-29 09:34:44.031048] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:16.520 [2024-11-29 09:34:44.031054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:16.520 [2024-11-29 09:34:44.031058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:16.520 [2024-11-29 09:34:44.031065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.520 [2024-11-29 09:34:44.031069] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:16.520 [2024-11-29 09:34:44.031078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:16.520 [2024-11-29 09:34:44.031095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.520 [2024-11-29 09:34:44.031101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.520 [2024-11-29 09:34:44.031107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:16.520 [2024-11-29 09:34:44.031114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:16.520 [2024-11-29 09:34:44.031119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:16.520 [2024-11-29 09:34:44.031126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:16.520 [2024-11-29 09:34:44.031131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:16.520 [2024-11-29 09:34:44.031137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:16.520 [2024-11-29 09:34:44.031145] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:16.520 [2024-11-29 09:34:44.031154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.520 [2024-11-29 09:34:44.031169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:16.520 [2024-11-29 09:34:44.031176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:16.520 [2024-11-29 09:34:44.031181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:16.520 [2024-11-29 09:34:44.031187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:16.520 [2024-11-29 09:34:44.031192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:16.520 [2024-11-29 09:34:44.031200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:16.520 [2024-11-29 09:34:44.031205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:16.520 [2024-11-29 09:34:44.031212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:16.520 [2024-11-29 09:34:44.031216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:16.520 [2024-11-29 09:34:44.031222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:17:16.520 [2024-11-29 09:34:44.031228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:16.520 [2024-11-29 09:34:44.031234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:16.520 [2024-11-29 09:34:44.031240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:16.520 [2024-11-29 09:34:44.031247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:16.520 [2024-11-29 09:34:44.031251] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:16.520 [2024-11-29 09:34:44.031259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.520 [2024-11-29 09:34:44.031265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:16.520 [2024-11-29 09:34:44.031272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:16.520 [2024-11-29 09:34:44.031277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:16.520 [2024-11-29 09:34:44.031284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:16.520 [2024-11-29 09:34:44.031290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.520 [2024-11-29 09:34:44.031299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:16.520 [2024-11-29 09:34:44.031306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:17:16.520 [2024-11-29 09:34:44.031313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.520 [2024-11-29 09:34:44.031385] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:16.520 [2024-11-29 09:34:44.031394] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:19.049 [2024-11-29 09:34:46.315367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.315425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:19.049 [2024-11-29 09:34:46.315439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2283.972 ms 00:17:19.049 [2024-11-29 09:34:46.315449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.323968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.324021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.049 [2024-11-29 09:34:46.324043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.453 ms 00:17:19.049 [2024-11-29 09:34:46.324055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.324145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.324156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.049 [2024-11-29 09:34:46.324164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:19.049 [2024-11-29 09:34:46.324173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.341797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.341854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.049 [2024-11-29 09:34:46.341873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.566 ms 00:17:19.049 [2024-11-29 09:34:46.341889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.341943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.341960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.049 [2024-11-29 09:34:46.341973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.049 [2024-11-29 09:34:46.341987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.342412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.342441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.049 [2024-11-29 09:34:46.342473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:17:19.049 [2024-11-29 09:34:46.342490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.342694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.342711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.049 [2024-11-29 09:34:46.342724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:17:19.049 [2024-11-29 09:34:46.342738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.349414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.349658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.049 [2024-11-29 
09:34:46.349684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.619 ms 00:17:19.049 [2024-11-29 09:34:46.349713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.359241] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:19.049 [2024-11-29 09:34:46.373621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.373651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.049 [2024-11-29 09:34:46.373664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.793 ms 00:17:19.049 [2024-11-29 09:34:46.373674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.448928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.448990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:19.049 [2024-11-29 09:34:46.449015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.214 ms 00:17:19.049 [2024-11-29 09:34:46.449027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.449322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.449349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.049 [2024-11-29 09:34:46.449364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:17:19.049 [2024-11-29 09:34:46.449375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.452301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.452446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:19.049 [2024-11-29 09:34:46.452467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:17:19.049 [2024-11-29 09:34:46.452475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.454760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.454783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:19.049 [2024-11-29 09:34:46.454794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:17:19.049 [2024-11-29 09:34:46.454801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.455453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.455603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.049 [2024-11-29 09:34:46.455626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:19.049 [2024-11-29 09:34:46.455636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.478885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.479008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:19.049 [2024-11-29 09:34:46.479075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.190 ms 00:17:19.049 [2024-11-29 09:34:46.479099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.482678] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.482788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:19.049 [2024-11-29 09:34:46.482839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:17:19.049 [2024-11-29 09:34:46.482883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.049 [2024-11-29 09:34:46.485509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.049 [2024-11-29 09:34:46.485618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:19.049 [2024-11-29 09:34:46.485673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:17:19.049 [2024-11-29 09:34:46.485695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.050 [2024-11-29 09:34:46.488462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.050 [2024-11-29 09:34:46.488561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.050 [2024-11-29 09:34:46.488646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:17:19.050 [2024-11-29 09:34:46.488709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.050 [2024-11-29 09:34:46.488771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.050 [2024-11-29 09:34:46.488797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.050 [2024-11-29 09:34:46.488848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.050 [2024-11-29 09:34:46.488880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.050 [2024-11-29 09:34:46.488972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.050 [2024-11-29 09:34:46.489002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.050 [2024-11-29 09:34:46.489076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:19.050 [2024-11-29 09:34:46.489098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.050 [2024-11-29 09:34:46.490090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2467.719 ms, result 0 00:17:19.050 { 00:17:19.050 "name": "ftl0", 00:17:19.050 "uuid": "700c8d61-4e0d-4599-a146-cab694a5ba02" 00:17:19.050 } 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:19.050 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:19.308 [ 00:17:19.308 { 00:17:19.308 "name": "ftl0", 00:17:19.308 "aliases": [ 00:17:19.308 "700c8d61-4e0d-4599-a146-cab694a5ba02" 00:17:19.308 ], 00:17:19.308 "product_name": "FTL disk", 00:17:19.308 
"block_size": 4096, 00:17:19.308 "num_blocks": 20971520, 00:17:19.308 "uuid": "700c8d61-4e0d-4599-a146-cab694a5ba02", 00:17:19.308 "assigned_rate_limits": { 00:17:19.308 "rw_ios_per_sec": 0, 00:17:19.308 "rw_mbytes_per_sec": 0, 00:17:19.308 "r_mbytes_per_sec": 0, 00:17:19.308 "w_mbytes_per_sec": 0 00:17:19.308 }, 00:17:19.308 "claimed": false, 00:17:19.308 "zoned": false, 00:17:19.308 "supported_io_types": { 00:17:19.308 "read": true, 00:17:19.308 "write": true, 00:17:19.308 "unmap": true, 00:17:19.308 "flush": true, 00:17:19.308 "reset": false, 00:17:19.308 "nvme_admin": false, 00:17:19.308 "nvme_io": false, 00:17:19.308 "nvme_io_md": false, 00:17:19.308 "write_zeroes": true, 00:17:19.308 "zcopy": false, 00:17:19.308 "get_zone_info": false, 00:17:19.308 "zone_management": false, 00:17:19.308 "zone_append": false, 00:17:19.308 "compare": false, 00:17:19.308 "compare_and_write": false, 00:17:19.308 "abort": false, 00:17:19.308 "seek_hole": false, 00:17:19.308 "seek_data": false, 00:17:19.308 "copy": false, 00:17:19.308 "nvme_iov_md": false 00:17:19.308 }, 00:17:19.308 "driver_specific": { 00:17:19.308 "ftl": { 00:17:19.308 "base_bdev": "a0cb1636-1685-4bf4-8214-6f023f15fd20", 00:17:19.308 "cache": "nvc0n1p0" 00:17:19.308 } 00:17:19.308 } 00:17:19.308 } 00:17:19.308 ] 00:17:19.308 09:34:46 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:19.308 09:34:46 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:19.308 09:34:46 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:19.567 09:34:47 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:19.567 09:34:47 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:19.827 [2024-11-29 09:34:47.312365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.312479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.827 [2024-11-29 09:34:47.312524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.827 [2024-11-29 09:34:47.312545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.312600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.827 [2024-11-29 09:34:47.313043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.313266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.827 [2024-11-29 09:34:47.313356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:17:19.827 [2024-11-29 09:34:47.313438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.313948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.314037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.827 [2024-11-29 09:34:47.314102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:17:19.827 [2024-11-29 09:34:47.314125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.317371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.317449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.827 [2024-11-29 
09:34:47.317503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:17:19.827 [2024-11-29 09:34:47.317525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.323752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.323841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:19.827 [2024-11-29 09:34:47.323895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:17:19.827 [2024-11-29 09:34:47.323905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.325330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.325363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.827 [2024-11-29 09:34:47.325376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:17:19.827 [2024-11-29 09:34:47.325384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.328557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.328608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.827 [2024-11-29 09:34:47.328620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:17:19.827 [2024-11-29 09:34:47.328629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.328800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.328809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.827 [2024-11-29 09:34:47.328831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:19.827 [2024-11-29 09:34:47.328839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.330220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.330251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:19.827 [2024-11-29 09:34:47.330262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:17:19.827 [2024-11-29 09:34:47.330269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.331310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.331414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:19.827 [2024-11-29 09:34:47.331430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:17:19.827 [2024-11-29 09:34:47.331436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.332259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.332287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.827 [2024-11-29 09:34:47.332298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:17:19.827 [2024-11-29 09:34:47.332305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.333121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.827 [2024-11-29 09:34:47.333153] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:19.827 [2024-11-29 09:34:47.333164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:17:19.827 [2024-11-29 09:34:47.333171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.827 [2024-11-29 09:34:47.333209] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:19.827 [2024-11-29 09:34:47.333222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 
09:34:47.333419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:19.827 [2024-11-29 09:34:47.333469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:19.828 [2024-11-29 09:34:47.333649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.333999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:19.828 [2024-11-29 09:34:47.334114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:19.828 [2024-11-29 09:34:47.334125] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 700c8d61-4e0d-4599-a146-cab694a5ba02 00:17:19.828 [2024-11-29 09:34:47.334132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:19.828 [2024-11-29 09:34:47.334141] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:19.828 [2024-11-29 09:34:47.334148] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:19.828 [2024-11-29 09:34:47.334157] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:19.828 [2024-11-29 09:34:47.334164] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:19.828 [2024-11-29 09:34:47.334173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:19.828 [2024-11-29 09:34:47.334180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:19.828 [2024-11-29 09:34:47.334188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:19.828 [2024-11-29 09:34:47.334194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:19.828 [2024-11-29 09:34:47.334202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.828 [2024-11-29 09:34:47.334209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:19.828 [2024-11-29 09:34:47.334218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:17:19.828 [2024-11-29 09:34:47.334227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.828 [2024-11-29 09:34:47.335742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.828 [2024-11-29 09:34:47.335763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:19.828 [2024-11-29 09:34:47.335773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:17:19.828 [2024-11-29 09:34:47.335780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.828 [2024-11-29 09:34:47.335861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.828 [2024-11-29 09:34:47.335870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:19.828 [2024-11-29 09:34:47.335883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:19.828 [2024-11-29 09:34:47.335891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.828 [2024-11-29 09:34:47.341233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.341264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.829 [2024-11-29 09:34:47.341274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.341282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 
[2024-11-29 09:34:47.341342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.341350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.829 [2024-11-29 09:34:47.341361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.341368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.341467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.341477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.829 [2024-11-29 09:34:47.341498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.341505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.341544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.341552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.829 [2024-11-29 09:34:47.341562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.341571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.350906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.350944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.829 [2024-11-29 09:34:47.350965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.350973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.358828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.358863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.829 [2024-11-29 09:34:47.358877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.358885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.358938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.358947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.829 [2024-11-29 09:34:47.358956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.358963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.359031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.359040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.829 [2024-11-29 09:34:47.359049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.359056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.359138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.359148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.829 [2024-11-29 09:34:47.359157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.359164] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.359209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.359218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:19.829 [2024-11-29 09:34:47.359227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.359234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.359288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.359296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.829 [2024-11-29 09:34:47.359306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.359313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.359369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.829 [2024-11-29 09:34:47.359378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.829 [2024-11-29 09:34:47.359399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.829 [2024-11-29 09:34:47.359406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.829 [2024-11-29 09:34:47.359620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.174 ms, result 0 00:17:19.829 true 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88036 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 88036 ']' 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 88036 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88036 00:17:19.829 killing process with pid 88036 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88036' 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 88036 00:17:19.829 09:34:47 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 88036 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.115 09:34:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:23.115 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:23.115 fio-3.35 00:17:23.115 Starting 1 thread 00:17:28.568 00:17:28.568 test: (groupid=0, jobs=1): err= 0: pid=88193: Fri Nov 29 09:34:55 2024 00:17:28.568 read: IOPS=856, BW=56.9MiB/s (59.6MB/s)(255MiB/4476msec) 00:17:28.568 slat (nsec): min=3971, max=20673, avg=5469.93, stdev=1883.70 00:17:28.568 clat (usec): min=285, max=1440, avg=529.68, stdev=160.52 00:17:28.568 lat (usec): min=290, max=1446, avg=535.15, stdev=160.75 00:17:28.568 clat percentiles (usec): 00:17:28.568 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 400], 00:17:28.568 | 30.00th=[ 457], 40.00th=[ 490], 50.00th=[ 537], 60.00th=[ 545], 00:17:28.568 | 70.00th=[ 553], 80.00th=[ 611], 90.00th=[ 807], 95.00th=[ 840], 00:17:28.568 | 99.00th=[ 955], 99.50th=[ 1020], 99.90th=[ 1188], 99.95th=[ 1352], 00:17:28.568 | 99.99th=[ 1434] 00:17:28.568 write: IOPS=862, BW=57.3MiB/s (60.1MB/s)(256MiB/4469msec); 0 zone resets 00:17:28.568 slat (nsec): min=14538, max=55167, avg=19956.85, stdev=3764.14 00:17:28.568 clat (usec): min=295, max=1573, avg=597.35, stdev=182.64 00:17:28.568 lat (usec): min=320, max=1599, avg=617.31, stdev=182.46 00:17:28.568 clat percentiles (usec): 00:17:28.568 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 359], 20.00th=[ 486], 00:17:28.568 | 30.00th=[ 498], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 619], 00:17:28.568 | 70.00th=[ 635], 80.00th=[ 709], 90.00th=[ 898], 95.00th=[ 922], 00:17:28.568 | 99.00th=[ 1123], 99.50th=[ 1188], 99.90th=[ 1303], 99.95th=[ 1549], 00:17:28.568 | 99.99th=[ 1582] 00:17:28.568 bw ( KiB/s): min=51000, max=83504, per=100.00%, avg=60061.00, stdev=10383.68, samples=8 00:17:28.568 iops : min= 750, max= 1228, avg=883.25, stdev=152.70, samples=8 00:17:28.568 lat (usec) : 500=36.64%, 750=47.11%, 1000=14.94% 
00:17:28.568 lat (msec) : 2=1.31% 00:17:28.568 cpu : usr=99.28%, sys=0.04%, ctx=8, majf=0, minf=1181 00:17:28.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:28.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.568 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:28.568 00:17:28.568 Run status group 0 (all jobs): 00:17:28.568 READ: bw=56.9MiB/s (59.6MB/s), 56.9MiB/s-56.9MiB/s (59.6MB/s-59.6MB/s), io=255MiB (267MB), run=4476-4476msec 00:17:28.568 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=256MiB (269MB), run=4469-4469msec 00:17:29.138 ----------------------------------------------------- 00:17:29.138 Suppressions used: 00:17:29.138 count bytes template 00:17:29.138 1 5 /usr/src/fio/parse.c 00:17:29.138 1 8 libtcmalloc_minimal.so 00:17:29.138 1 904 libcrypto.so 00:17:29.138 ----------------------------------------------------- 00:17:29.138 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:29.138 09:34:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:29.397 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:29.397 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:29.397 fio-3.35 00:17:29.397 Starting 2 threads 00:17:56.012 00:17:56.012 first_half: (groupid=0, jobs=1): err= 0: pid=88290: Fri Nov 29 09:35:19 2024 00:17:56.012 read: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(256MiB/21644msec) 00:17:56.012 slat (nsec): min=3116, max=24426, avg=4253.20, stdev=997.68 00:17:56.012 clat (msec): min=11, max=371, avg=36.11, stdev=20.79 00:17:56.012 lat (msec): min=11, max=371, avg=36.11, stdev=20.79 00:17:56.012 clat percentiles (msec): 00:17:56.012 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 30], 00:17:56.012 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:17:56.012 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 66], 00:17:56.012 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 178], 99.95th=[ 243], 00:17:56.012 | 99.99th=[ 363] 00:17:56.012 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(256MiB/21528msec); 0 zone resets 00:17:56.012 slat (usec): min=3, max=2131, avg= 5.79, stdev=12.44 00:17:56.012 clat (usec): min=330, max=33821, avg=6172.12, stdev=4610.63 00:17:56.012 lat (usec): min=337, max=33826, avg=6177.91, stdev=4611.40 00:17:56.012 clat percentiles (usec): 00:17:56.012 | 1.00th=[ 832], 5.00th=[ 1876], 10.00th=[ 2442], 20.00th=[ 3163], 00:17:56.012 | 30.00th=[ 3884], 40.00th=[ 4490], 50.00th=[ 4883], 60.00th=[ 5342], 00:17:56.012 | 70.00th=[ 5604], 80.00th=[ 6718], 90.00th=[14222], 95.00th=[16909], 00:17:56.012 | 99.00th=[21627], 99.50th=[24249], 99.90th=[27395], 99.95th=[31589], 00:17:56.012 | 99.99th=[33817] 00:17:56.012 bw ( KiB/s): min= 816, max=41896, per=99.01%, avg=23814.91, stdev=15757.16, samples=22 00:17:56.012 iops : min= 204, max=10474, avg=5953.73, stdev=3939.29, samples=22 00:17:56.012 lat (usec) : 500=0.04%, 750=0.26%, 1000=0.64% 00:17:56.012 lat (msec) : 2=1.91%, 4=13.09%, 10=26.23%, 20=7.21%, 50=47.47% 00:17:56.012 lat (msec) : 100=1.64%, 250=1.50%, 500=0.02% 00:17:56.013 cpu : usr=99.23%, sys=0.18%, ctx=45, majf=0, minf=5597 00:17:56.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:56.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.013 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:56.013 issued rwts: total=65481,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:56.013 second_half: (groupid=0, jobs=1): err= 0: pid=88291: Fri Nov 29 09:35:19 2024 00:17:56.013 read: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(256MiB/21844msec) 00:17:56.013 slat (usec): min=3, max=131, avg= 5.21, stdev= 1.06 00:17:56.013 clat (usec): min=523, max=399007, avg=35762.18, stdev=23476.65 00:17:56.013 lat (usec): min=528, max=399013, avg=35767.39, stdev=23476.76 00:17:56.013 clat percentiles (msec): 00:17:56.013 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 30], 00:17:56.013 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:17:56.013 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 71], 00:17:56.013 
| 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 296], 99.95th=[ 384], 00:17:56.013 | 99.99th=[ 393] 00:17:56.013 write: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(256MiB/21797msec); 0 zone resets 00:17:56.013 slat (usec): min=3, max=1946, avg= 6.57, stdev= 8.56 00:17:56.013 clat (usec): min=356, max=40963, avg=6912.89, stdev=7001.19 00:17:56.013 lat (usec): min=366, max=40969, avg=6919.46, stdev=7001.37 00:17:56.013 clat percentiles (usec): 00:17:56.013 | 1.00th=[ 725], 5.00th=[ 898], 10.00th=[ 1221], 20.00th=[ 2442], 00:17:56.013 | 30.00th=[ 3163], 40.00th=[ 3982], 50.00th=[ 4817], 60.00th=[ 5538], 00:17:56.013 | 70.00th=[ 6194], 80.00th=[ 9372], 90.00th=[16450], 95.00th=[22152], 00:17:56.013 | 99.00th=[33424], 99.50th=[34341], 99.90th=[38536], 99.95th=[39584], 00:17:56.013 | 99.99th=[40633] 00:17:56.013 bw ( KiB/s): min= 2952, max=61282, per=98.70%, avg=23740.68, stdev=16174.22, samples=22 00:17:56.013 iops : min= 738, max=15320, avg=5935.14, stdev=4043.51, samples=22 00:17:56.013 lat (usec) : 500=0.03%, 750=0.72%, 1000=2.60% 00:17:56.013 lat (msec) : 2=4.97%, 4=11.94%, 10=21.11%, 20=7.36%, 50=48.03% 00:17:56.013 lat (msec) : 100=1.54%, 250=1.65%, 500=0.06% 00:17:56.013 cpu : usr=99.32%, sys=0.15%, ctx=28, majf=0, minf=5539 00:17:56.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:56.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.013 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:56.013 issued rwts: total=65481,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:56.013 00:17:56.013 Run status group 0 (all jobs): 00:17:56.013 READ: bw=23.4MiB/s (24.6MB/s), 11.7MiB/s-11.8MiB/s (12.3MB/s-12.4MB/s), io=512MiB (536MB), run=21644-21844msec 00:17:56.013 WRITE: bw=23.5MiB/s (24.6MB/s), 11.7MiB/s-11.9MiB/s (12.3MB/s-12.5MB/s), io=512MiB (537MB), run=21528-21797msec 00:17:56.013 ----------------------------------------------------- 00:17:56.013 Suppressions used: 00:17:56.013 count bytes template 00:17:56.013 2 10 /usr/src/fio/parse.c 00:17:56.013 4 384 /usr/src/fio/iolog.c 00:17:56.013 1 8 libtcmalloc_minimal.so 00:17:56.013 1 904 libcrypto.so 00:17:56.013 ----------------------------------------------------- 00:17:56.013 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:56.013 09:35:21 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:56.013 09:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:56.013 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:56.013 fio-3.35 00:17:56.013 Starting 1 thread 00:18:08.212 00:18:08.212 test: (groupid=0, jobs=1): err= 0: pid=88582: Fri Nov 29 09:35:35 2024 00:18:08.212 read: IOPS=7809, BW=30.5MiB/s (32.0MB/s)(255MiB/8349msec) 00:18:08.212 slat (nsec): min=3038, max=22895, avg=4045.68, stdev=1124.37 00:18:08.212 clat (usec): min=529, max=37746, avg=16383.05, stdev=2258.78 00:18:08.212 lat (usec): min=535, max=37751, avg=16387.09, stdev=2259.03 00:18:08.212 clat percentiles (usec): 00:18:08.212 | 1.00th=[13304], 5.00th=[14484], 10.00th=[14615], 20.00th=[14877], 00:18:08.212 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15795], 60.00th=[16319], 00:18:08.212 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18744], 95.00th=[21627], 00:18:08.212 | 99.00th=[24249], 99.50th=[25560], 99.90th=[29492], 99.95th=[33817], 00:18:08.212 | 99.99th=[36963] 00:18:08.212 write: IOPS=12.9k, BW=50.5MiB/s (53.0MB/s)(256MiB/5066msec); 0 zone resets 00:18:08.212 slat (usec): min=3, max=508, avg= 5.45, stdev= 3.28 00:18:08.212 clat (usec): min=482, max=50700, avg=9847.08, stdev=10921.49 00:18:08.212 lat (usec): min=486, max=50705, avg=9852.54, stdev=10921.58 00:18:08.212 clat percentiles (usec): 00:18:08.212 | 1.00th=[ 652], 5.00th=[ 807], 10.00th=[ 922], 20.00th=[ 1090], 00:18:08.212 | 30.00th=[ 1254], 40.00th=[ 1729], 50.00th=[ 6194], 60.00th=[ 8160], 00:18:08.212 | 70.00th=[12387], 80.00th=[15270], 90.00th=[30802], 95.00th=[32637], 00:18:08.212 | 99.00th=[39060], 99.50th=[40633], 99.90th=[46400], 99.95th=[46924], 00:18:08.212 | 99.99th=[49021] 00:18:08.212 bw ( KiB/s): min= 5664, max=74464, per=92.11%, avg=47662.55, stdev=17681.63, samples=11 00:18:08.212 iops : min= 1416, max=18616, avg=11915.64, stdev=4420.41, samples=11 00:18:08.212 lat (usec) : 500=0.01%, 750=1.67%, 1000=5.36% 00:18:08.212 lat (msec) : 2=13.39%, 4=0.72%, 10=11.31%, 20=55.70%, 50=11.84% 00:18:08.212 lat (msec) : 100=0.01% 00:18:08.212 cpu : usr=99.14%, sys=0.16%, ctx=21, majf=0, 
minf=5577 00:18:08.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:08.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.212 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.212 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.212 00:18:08.212 Run status group 0 (all jobs): 00:18:08.212 READ: bw=30.5MiB/s (32.0MB/s), 30.5MiB/s-30.5MiB/s (32.0MB/s-32.0MB/s), io=255MiB (267MB), run=8349-8349msec 00:18:08.212 WRITE: bw=50.5MiB/s (53.0MB/s), 50.5MiB/s-50.5MiB/s (53.0MB/s-53.0MB/s), io=256MiB (268MB), run=5066-5066msec 00:18:09.154 ----------------------------------------------------- 00:18:09.154 Suppressions used: 00:18:09.154 count bytes template 00:18:09.154 1 5 /usr/src/fio/parse.c 00:18:09.154 2 192 /usr/src/fio/iolog.c 00:18:09.154 1 8 libtcmalloc_minimal.so 00:18:09.154 1 904 libcrypto.so 00:18:09.154 ----------------------------------------------------- 00:18:09.154 00:18:09.154 09:35:36 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:09.154 09:35:36 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.154 09:35:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:09.415 Remove shared memory files 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70939 /dev/shm/spdk_tgt_trace.pid86973 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:09.415 00:18:09.415 real 0m56.541s 00:18:09.415 user 2m3.294s 00:18:09.415 sys 0m2.802s 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.415 ************************************ 00:18:09.415 END TEST ftl_fio_basic 00:18:09.415 ************************************ 00:18:09.415 09:35:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.415 09:35:36 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:09.415 09:35:36 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:09.415 09:35:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.415 09:35:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:09.415 ************************************ 00:18:09.415 START TEST ftl_bdevperf 00:18:09.415 ************************************ 00:18:09.415 09:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:09.415 * Looking for test storage... 
00:18:09.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.415 --rc genhtml_branch_coverage=1 00:18:09.415 --rc genhtml_function_coverage=1 00:18:09.415 --rc genhtml_legend=1 00:18:09.415 --rc geninfo_all_blocks=1 00:18:09.415 --rc geninfo_unexecuted_blocks=1 00:18:09.415 00:18:09.415 ' 00:18:09.415 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.415 --rc genhtml_branch_coverage=1 00:18:09.416 
--rc genhtml_function_coverage=1 00:18:09.416 --rc genhtml_legend=1 00:18:09.416 --rc geninfo_all_blocks=1 00:18:09.416 --rc geninfo_unexecuted_blocks=1 00:18:09.416 00:18:09.416 ' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.416 --rc genhtml_branch_coverage=1 00:18:09.416 --rc genhtml_function_coverage=1 00:18:09.416 --rc genhtml_legend=1 00:18:09.416 --rc geninfo_all_blocks=1 00:18:09.416 --rc geninfo_unexecuted_blocks=1 00:18:09.416 00:18:09.416 ' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.416 --rc genhtml_branch_coverage=1 00:18:09.416 --rc genhtml_function_coverage=1 00:18:09.416 --rc genhtml_legend=1 00:18:09.416 --rc geninfo_all_blocks=1 00:18:09.416 --rc geninfo_unexecuted_blocks=1 00:18:09.416 00:18:09.416 ' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:09.416 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88809 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88809 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88809 ']' 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.677 09:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:09.677 [2024-11-29 09:35:37.217440] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:09.677 [2024-11-29 09:35:37.217645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88809 ] 00:18:09.677 [2024-11-29 09:35:37.357815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
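The bdevperf process above was launched with -z, so it initializes and then idles until driven over RPC (the perform_tests calls later in this log); the harness blocks in waitforlisten until the app answers on its UNIX-domain RPC socket. A minimal sketch of such a wait loop, assuming the default /var/tmp/spdk.sock path (illustrative, not the exact waitforlisten helper):

  # Poll until the SPDK app finishes init and serves RPCs on its socket.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done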
00:18:09.677 [2024-11-29 09:35:37.385860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.938 [2024-11-29 09:35:37.426861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:10.511 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:10.771 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:11.032 { 00:18:11.032 "name": "nvme0n1", 00:18:11.032 "aliases": [ 00:18:11.032 "b3655162-f6ba-4773-a4a0-ac9b36f04150" 00:18:11.032 ], 00:18:11.032 "product_name": "NVMe disk", 00:18:11.032 "block_size": 4096, 00:18:11.032 "num_blocks": 1310720, 00:18:11.032 "uuid": "b3655162-f6ba-4773-a4a0-ac9b36f04150", 00:18:11.032 "numa_id": -1, 00:18:11.032 "assigned_rate_limits": { 00:18:11.032 "rw_ios_per_sec": 0, 00:18:11.032 "rw_mbytes_per_sec": 0, 00:18:11.032 "r_mbytes_per_sec": 0, 00:18:11.032 "w_mbytes_per_sec": 0 00:18:11.032 }, 00:18:11.032 "claimed": true, 00:18:11.032 "claim_type": "read_many_write_one", 00:18:11.032 "zoned": false, 00:18:11.032 "supported_io_types": { 00:18:11.032 "read": true, 00:18:11.032 "write": true, 00:18:11.032 "unmap": true, 00:18:11.032 "flush": true, 00:18:11.032 "reset": true, 00:18:11.032 "nvme_admin": true, 00:18:11.032 "nvme_io": true, 00:18:11.032 "nvme_io_md": false, 00:18:11.032 "write_zeroes": true, 00:18:11.032 "zcopy": false, 00:18:11.032 "get_zone_info": false, 00:18:11.032 "zone_management": false, 00:18:11.032 "zone_append": false, 00:18:11.032 "compare": true, 00:18:11.032 "compare_and_write": false, 00:18:11.032 "abort": true, 00:18:11.032 "seek_hole": false, 00:18:11.032 "seek_data": false, 00:18:11.032 "copy": true, 00:18:11.032 "nvme_iov_md": false 00:18:11.032 }, 00:18:11.032 "driver_specific": { 00:18:11.032 "nvme": [ 00:18:11.032 { 00:18:11.032 "pci_address": "0000:00:11.0", 00:18:11.032 "trid": { 00:18:11.032 "trtype": "PCIe", 00:18:11.032 "traddr": "0000:00:11.0" 00:18:11.032 }, 00:18:11.032 "ctrlr_data": { 00:18:11.032 "cntlid": 0, 00:18:11.032 "vendor_id": "0x1b36", 00:18:11.032 "model_number": "QEMU NVMe Ctrl", 
00:18:11.032 "serial_number": "12341", 00:18:11.032 "firmware_revision": "8.0.0", 00:18:11.032 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:11.032 "oacs": { 00:18:11.032 "security": 0, 00:18:11.032 "format": 1, 00:18:11.032 "firmware": 0, 00:18:11.032 "ns_manage": 1 00:18:11.032 }, 00:18:11.032 "multi_ctrlr": false, 00:18:11.032 "ana_reporting": false 00:18:11.032 }, 00:18:11.032 "vs": { 00:18:11.032 "nvme_version": "1.4" 00:18:11.032 }, 00:18:11.032 "ns_data": { 00:18:11.032 "id": 1, 00:18:11.032 "can_share": false 00:18:11.032 } 00:18:11.032 } 00:18:11.032 ], 00:18:11.032 "mp_policy": "active_passive" 00:18:11.032 } 00:18:11.032 } 00:18:11.032 ]' 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:11.032 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:11.294 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3d0f6768-1439-4fd2-813d-8987bd8c2031 00:18:11.294 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:11.294 09:35:38 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d0f6768-1439-4fd2-813d-8987bd8c2031 00:18:11.554 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:11.814 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=e4ef6250-9a81-417f-8fd9-4a64a879961a 00:18:11.814 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e4ef6250-9a81-417f-8fd9-4a64a879961a 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:12.073 09:35:39 
ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:12.073 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:12.334 { 00:18:12.334 "name": "0ec4c197-62b0-45a3-a3d8-1edabc028a79", 00:18:12.334 "aliases": [ 00:18:12.334 "lvs/nvme0n1p0" 00:18:12.334 ], 00:18:12.334 "product_name": "Logical Volume", 00:18:12.334 "block_size": 4096, 00:18:12.334 "num_blocks": 26476544, 00:18:12.334 "uuid": "0ec4c197-62b0-45a3-a3d8-1edabc028a79", 00:18:12.334 "assigned_rate_limits": { 00:18:12.334 "rw_ios_per_sec": 0, 00:18:12.334 "rw_mbytes_per_sec": 0, 00:18:12.334 "r_mbytes_per_sec": 0, 00:18:12.334 "w_mbytes_per_sec": 0 00:18:12.334 }, 00:18:12.334 "claimed": false, 00:18:12.334 "zoned": false, 00:18:12.334 "supported_io_types": { 00:18:12.334 "read": true, 00:18:12.334 "write": true, 00:18:12.334 "unmap": true, 00:18:12.334 "flush": false, 00:18:12.334 "reset": true, 00:18:12.334 "nvme_admin": false, 00:18:12.334 "nvme_io": false, 00:18:12.334 "nvme_io_md": false, 00:18:12.334 "write_zeroes": true, 00:18:12.334 "zcopy": false, 00:18:12.334 "get_zone_info": false, 00:18:12.334 "zone_management": false, 00:18:12.334 "zone_append": false, 00:18:12.334 "compare": false, 00:18:12.334 "compare_and_write": false, 00:18:12.334 "abort": false, 00:18:12.334 "seek_hole": true, 00:18:12.334 "seek_data": true, 00:18:12.334 "copy": false, 00:18:12.334 "nvme_iov_md": false 00:18:12.334 }, 00:18:12.334 "driver_specific": { 00:18:12.334 "lvol": { 00:18:12.334 "lvol_store_uuid": "e4ef6250-9a81-417f-8fd9-4a64a879961a", 00:18:12.334 "base_bdev": "nvme0n1", 00:18:12.334 "thin_provision": true, 00:18:12.334 "num_allocated_clusters": 0, 00:18:12.334 "snapshot": false, 00:18:12.334 "clone": false, 00:18:12.334 "esnap_clone": false 00:18:12.334 } 00:18:12.334 } 00:18:12.334 } 00:18:12.334 ]' 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:12.334 09:35:39 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1385 -- # local nb 00:18:12.595 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:12.854 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:12.854 { 00:18:12.854 "name": "0ec4c197-62b0-45a3-a3d8-1edabc028a79", 00:18:12.854 "aliases": [ 00:18:12.854 "lvs/nvme0n1p0" 00:18:12.854 ], 00:18:12.854 "product_name": "Logical Volume", 00:18:12.854 "block_size": 4096, 00:18:12.854 "num_blocks": 26476544, 00:18:12.854 "uuid": "0ec4c197-62b0-45a3-a3d8-1edabc028a79", 00:18:12.854 "assigned_rate_limits": { 00:18:12.854 "rw_ios_per_sec": 0, 00:18:12.854 "rw_mbytes_per_sec": 0, 00:18:12.854 "r_mbytes_per_sec": 0, 00:18:12.854 "w_mbytes_per_sec": 0 00:18:12.854 }, 00:18:12.854 "claimed": false, 00:18:12.854 "zoned": false, 00:18:12.854 "supported_io_types": { 00:18:12.854 "read": true, 00:18:12.854 "write": true, 00:18:12.854 "unmap": true, 00:18:12.854 "flush": false, 00:18:12.854 "reset": true, 00:18:12.854 "nvme_admin": false, 00:18:12.854 "nvme_io": false, 00:18:12.854 "nvme_io_md": false, 00:18:12.854 "write_zeroes": true, 00:18:12.854 "zcopy": false, 00:18:12.854 "get_zone_info": false, 00:18:12.854 "zone_management": false, 00:18:12.854 "zone_append": false, 00:18:12.854 "compare": false, 00:18:12.854 "compare_and_write": false, 00:18:12.854 "abort": false, 00:18:12.854 "seek_hole": true, 00:18:12.854 "seek_data": true, 00:18:12.854 "copy": false, 00:18:12.854 "nvme_iov_md": false 00:18:12.854 }, 00:18:12.854 "driver_specific": { 00:18:12.854 "lvol": { 00:18:12.855 "lvol_store_uuid": "e4ef6250-9a81-417f-8fd9-4a64a879961a", 00:18:12.855 "base_bdev": "nvme0n1", 00:18:12.855 "thin_provision": true, 00:18:12.855 "num_allocated_clusters": 0, 00:18:12.855 "snapshot": false, 00:18:12.855 "clone": false, 00:18:12.855 "esnap_clone": false 00:18:12.855 } 00:18:12.855 } 00:18:12.855 } 00:18:12.855 ]' 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:12.855 09:35:40 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0ec4c197-62b0-45a3-a3d8-1edabc028a79 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ec4c197-62b0-45a3-a3d8-1edabc028a79 
00:18:13.115 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:13.115 { 00:18:13.115 "name": "0ec4c197-62b0-45a3-a3d8-1edabc028a79", 00:18:13.115 "aliases": [ 00:18:13.115 "lvs/nvme0n1p0" 00:18:13.115 ], 00:18:13.115 "product_name": "Logical Volume", 00:18:13.115 "block_size": 4096, 00:18:13.115 "num_blocks": 26476544, 00:18:13.115 "uuid": "0ec4c197-62b0-45a3-a3d8-1edabc028a79", 00:18:13.115 "assigned_rate_limits": { 00:18:13.115 "rw_ios_per_sec": 0, 00:18:13.115 "rw_mbytes_per_sec": 0, 00:18:13.115 "r_mbytes_per_sec": 0, 00:18:13.115 "w_mbytes_per_sec": 0 00:18:13.115 }, 00:18:13.115 "claimed": false, 00:18:13.115 "zoned": false, 00:18:13.115 "supported_io_types": { 00:18:13.115 "read": true, 00:18:13.115 "write": true, 00:18:13.115 "unmap": true, 00:18:13.115 "flush": false, 00:18:13.115 "reset": true, 00:18:13.115 "nvme_admin": false, 00:18:13.115 "nvme_io": false, 00:18:13.115 "nvme_io_md": false, 00:18:13.115 "write_zeroes": true, 00:18:13.115 "zcopy": false, 00:18:13.115 "get_zone_info": false, 00:18:13.115 "zone_management": false, 00:18:13.115 "zone_append": false, 00:18:13.115 "compare": false, 00:18:13.115 "compare_and_write": false, 00:18:13.115 "abort": false, 00:18:13.115 "seek_hole": true, 00:18:13.115 "seek_data": true, 00:18:13.115 "copy": false, 00:18:13.115 "nvme_iov_md": false 00:18:13.115 }, 00:18:13.115 "driver_specific": { 00:18:13.115 "lvol": { 00:18:13.115 "lvol_store_uuid": "e4ef6250-9a81-417f-8fd9-4a64a879961a", 00:18:13.115 "base_bdev": "nvme0n1", 00:18:13.115 "thin_provision": true, 00:18:13.115 "num_allocated_clusters": 0, 00:18:13.115 "snapshot": false, 00:18:13.115 "clone": false, 00:18:13.115 "esnap_clone": false 00:18:13.115 } 00:18:13.115 } 00:18:13.115 } 00:18:13.115 ]' 00:18:13.115 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:13.374 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:13.374 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:13.374 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:13.374 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:13.374 09:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:13.374 09:35:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:13.374 09:35:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0ec4c197-62b0-45a3-a3d8-1edabc028a79 -c nvc0n1p0 --l2p_dram_limit 20 00:18:13.374 [2024-11-29 09:35:41.087827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.374 [2024-11-29 09:35:41.087876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:13.374 [2024-11-29 09:35:41.087888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:13.374 [2024-11-29 09:35:41.087897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.374 [2024-11-29 09:35:41.087936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.374 [2024-11-29 09:35:41.087947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.374 [2024-11-29 09:35:41.087957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:13.374 [2024-11-29 09:35:41.087967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.374 [2024-11-29 
09:35:41.087986] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:13.374 [2024-11-29 09:35:41.088170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:13.374 [2024-11-29 09:35:41.088182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.374 [2024-11-29 09:35:41.088192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.374 [2024-11-29 09:35:41.088198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:18:13.374 [2024-11-29 09:35:41.088205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.374 [2024-11-29 09:35:41.088226] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 63720b83-66d4-49fe-802b-d487fea7e144 00:18:13.374 [2024-11-29 09:35:41.089492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.374 [2024-11-29 09:35:41.089521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:13.374 [2024-11-29 09:35:41.089531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:13.374 [2024-11-29 09:35:41.089539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.374 [2024-11-29 09:35:41.096384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.374 [2024-11-29 09:35:41.096410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.374 [2024-11-29 09:35:41.096422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:18:13.374 [2024-11-29 09:35:41.096428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.374 [2024-11-29 09:35:41.096529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.374 [2024-11-29 09:35:41.096540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.374 [2024-11-29 09:35:41.096549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:13.374 [2024-11-29 09:35:41.096556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.374 [2024-11-29 09:35:41.096600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.374 [2024-11-29 09:35:41.096608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:13.374 [2024-11-29 09:35:41.096616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:13.374 [2024-11-29 09:35:41.096621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.374 [2024-11-29 09:35:41.096639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.635 [2024-11-29 09:35:41.098306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.635 [2024-11-29 09:35:41.098339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.635 [2024-11-29 09:35:41.098348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:18:13.635 [2024-11-29 09:35:41.098359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.635 [2024-11-29 09:35:41.098385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.635 [2024-11-29 09:35:41.098395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:13.635 [2024-11-29 
09:35:41.098403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:13.635 [2024-11-29 09:35:41.098412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.635 [2024-11-29 09:35:41.098429] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:13.635 [2024-11-29 09:35:41.098542] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:13.635 [2024-11-29 09:35:41.098552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:13.635 [2024-11-29 09:35:41.098562] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:13.635 [2024-11-29 09:35:41.098570] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:13.635 [2024-11-29 09:35:41.098583] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:13.635 [2024-11-29 09:35:41.098602] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:13.635 [2024-11-29 09:35:41.098611] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:13.635 [2024-11-29 09:35:41.098619] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:13.635 [2024-11-29 09:35:41.098626] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:13.635 [2024-11-29 09:35:41.098632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.635 [2024-11-29 09:35:41.098640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:13.635 [2024-11-29 09:35:41.098646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:18:13.635 [2024-11-29 09:35:41.098657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.635 [2024-11-29 09:35:41.098725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.635 [2024-11-29 09:35:41.098734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:13.635 [2024-11-29 09:35:41.098743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:13.635 [2024-11-29 09:35:41.098749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.635 [2024-11-29 09:35:41.098818] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:13.635 [2024-11-29 09:35:41.098828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:13.635 [2024-11-29 09:35:41.098835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.635 [2024-11-29 09:35:41.098842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:13.635 [2024-11-29 09:35:41.098859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:13.635 [2024-11-29 09:35:41.098873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:13.635 [2024-11-29 09:35:41.098879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:18:13.635 [2024-11-29 09:35:41.098890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:13.635 [2024-11-29 09:35:41.098901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:13.635 [2024-11-29 09:35:41.098906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.635 [2024-11-29 09:35:41.098913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:13.635 [2024-11-29 09:35:41.098918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:13.635 [2024-11-29 09:35:41.098924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:13.635 [2024-11-29 09:35:41.098936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:13.635 [2024-11-29 09:35:41.098941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:13.635 [2024-11-29 09:35:41.098953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.635 [2024-11-29 09:35:41.098964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:13.635 [2024-11-29 09:35:41.098971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.635 [2024-11-29 09:35:41.098982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:13.635 [2024-11-29 09:35:41.098987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:13.635 [2024-11-29 09:35:41.098994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.635 [2024-11-29 09:35:41.099000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:13.635 [2024-11-29 09:35:41.099007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:13.635 [2024-11-29 09:35:41.099011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.635 [2024-11-29 09:35:41.099018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:13.636 [2024-11-29 09:35:41.099023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:13.636 [2024-11-29 09:35:41.099031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.636 [2024-11-29 09:35:41.099036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:13.636 [2024-11-29 09:35:41.099043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:13.636 [2024-11-29 09:35:41.099048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.636 [2024-11-29 09:35:41.099054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:13.636 [2024-11-29 09:35:41.099059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:13.636 [2024-11-29 09:35:41.099065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.636 [2024-11-29 09:35:41.099070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:13.636 [2024-11-29 09:35:41.099076] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:13.636 [2024-11-29 09:35:41.099080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.636 [2024-11-29 09:35:41.099089] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:13.636 [2024-11-29 09:35:41.099096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:13.636 [2024-11-29 09:35:41.099103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.636 [2024-11-29 09:35:41.099111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.636 [2024-11-29 09:35:41.099118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:13.636 [2024-11-29 09:35:41.099124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:13.636 [2024-11-29 09:35:41.099130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:13.636 [2024-11-29 09:35:41.099135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:13.636 [2024-11-29 09:35:41.099143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:13.636 [2024-11-29 09:35:41.099148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:13.636 [2024-11-29 09:35:41.099158] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:13.636 [2024-11-29 09:35:41.099166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.636 [2024-11-29 09:35:41.099176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:13.636 [2024-11-29 09:35:41.099182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:13.636 [2024-11-29 09:35:41.099190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:13.636 [2024-11-29 09:35:41.099195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:13.636 [2024-11-29 09:35:41.099203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:13.636 [2024-11-29 09:35:41.099209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:13.636 [2024-11-29 09:35:41.099216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:13.636 [2024-11-29 09:35:41.099222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:13.636 [2024-11-29 09:35:41.099229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:13.636 [2024-11-29 09:35:41.099237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:13.636 [2024-11-29 09:35:41.099245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:13.636 [2024-11-29 
09:35:41.099250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:13.636 [2024-11-29 09:35:41.099258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:13.636 [2024-11-29 09:35:41.099263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:13.636 [2024-11-29 09:35:41.099270] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:13.636 [2024-11-29 09:35:41.099277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.636 [2024-11-29 09:35:41.099284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:13.636 [2024-11-29 09:35:41.099291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:13.636 [2024-11-29 09:35:41.099298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:13.636 [2024-11-29 09:35:41.099304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:13.636 [2024-11-29 09:35:41.099313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.636 [2024-11-29 09:35:41.099319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:13.636 [2024-11-29 09:35:41.099327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:18:13.636 [2024-11-29 09:35:41.099332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.636 [2024-11-29 09:35:41.099358] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
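Condensing the xtrace above: the FTL bdev under test is assembled from a thin-provisioned logical volume (data device) plus a split of the second NVMe namespace (NV cache). A sketch of the same sequence as standalone rpc.py calls, reusing the UUIDs reported in this run (substitute your own when reproducing):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: 103424 MiB (101 GiB) thin lvol on the 0000:00:11.0 namespace.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u e4ef6250-9a81-417f-8fd9-4a64a879961a
  # NV cache: 5171 MiB split of the 0000:00:10.0 namespace.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1
  # FTL on top, with the L2P capped at 20 MiB of DRAM; startup may need to
  # scrub the NV cache, hence the extended 240 s RPC timeout.
  $RPC -t 240 bdev_ftl_create -b ftl0 -d 0ec4c197-62b0-45a3-a3d8-1edabc028a79 -c nvc0n1p0 --l2p_dram_limit 20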
00:18:13.636 [2024-11-29 09:35:41.099365] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:17.835 [2024-11-29 09:35:44.709574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.709648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:17.835 [2024-11-29 09:35:44.709669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3610.196 ms 00:18:17.835 [2024-11-29 09:35:44.709676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.719974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.720138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.835 [2024-11-29 09:35:44.720160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.220 ms 00:18:17.835 [2024-11-29 09:35:44.720167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.720268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.720279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:17.835 [2024-11-29 09:35:44.720288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:17.835 [2024-11-29 09:35:44.720295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.738594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.738634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.835 [2024-11-29 09:35:44.738648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.258 ms 00:18:17.835 [2024-11-29 09:35:44.738657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.738699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.738709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.835 [2024-11-29 09:35:44.738722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:17.835 [2024-11-29 09:35:44.738729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.739182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.739213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.835 [2024-11-29 09:35:44.739232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:18:17.835 [2024-11-29 09:35:44.739241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.739357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.739369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.835 [2024-11-29 09:35:44.739379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:17.835 [2024-11-29 09:35:44.739386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.745904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.745935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.835 [2024-11-29 
09:35:44.745948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.498 ms 00:18:17.835 [2024-11-29 09:35:44.745957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.755133] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:17.835 [2024-11-29 09:35:44.760529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.760557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:17.835 [2024-11-29 09:35:44.760566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.517 ms 00:18:17.835 [2024-11-29 09:35:44.760574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.829485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.829528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:17.835 [2024-11-29 09:35:44.829540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.882 ms 00:18:17.835 [2024-11-29 09:35:44.829551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.829708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.829720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:17.835 [2024-11-29 09:35:44.829727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:17.835 [2024-11-29 09:35:44.829738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.833396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.833428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:17.835 [2024-11-29 09:35:44.833441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.635 ms 00:18:17.835 [2024-11-29 09:35:44.833451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.836639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.836666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:17.835 [2024-11-29 09:35:44.836674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:18:17.835 [2024-11-29 09:35:44.836681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.836931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.836943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:17.835 [2024-11-29 09:35:44.836950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:18:17.835 [2024-11-29 09:35:44.836958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.869063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.869094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:17.835 [2024-11-29 09:35:44.869103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.093 ms 00:18:17.835 [2024-11-29 09:35:44.869116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.835 [2024-11-29 09:35:44.873818] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.835 [2024-11-29 09:35:44.873849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:17.835 [2024-11-29 09:35:44.873856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:18:17.835 [2024-11-29 09:35:44.873864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.836 [2024-11-29 09:35:44.877190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.836 [2024-11-29 09:35:44.877218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:17.836 [2024-11-29 09:35:44.877225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:18:17.836 [2024-11-29 09:35:44.877232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.836 [2024-11-29 09:35:44.881395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.836 [2024-11-29 09:35:44.881426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:17.836 [2024-11-29 09:35:44.881434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.140 ms 00:18:17.836 [2024-11-29 09:35:44.881442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.836 [2024-11-29 09:35:44.881478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.836 [2024-11-29 09:35:44.881487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:17.836 [2024-11-29 09:35:44.881494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:17.836 [2024-11-29 09:35:44.881502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.836 [2024-11-29 09:35:44.881559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.836 [2024-11-29 09:35:44.881568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:17.836 [2024-11-29 09:35:44.881575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:17.836 [2024-11-29 09:35:44.881582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.836 [2024-11-29 09:35:44.882398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3794.208 ms, result 0 00:18:17.836 { 00:18:17.836 "name": "ftl0", 00:18:17.836 "uuid": "63720b83-66d4-49fe-802b-d487fea7e144" 00:18:17.836 } 00:18:17.836 09:35:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:17.836 09:35:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:17.836 09:35:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:17.836 09:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:17.836 [2024-11-29 09:35:45.195805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:17.836 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:17.836 Zero copy mechanism will not be used. 00:18:17.836 Running I/O for 4 seconds... 
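The zero-copy notice above is pure arithmetic: bdevperf only uses its zero-copy path for I/O units up to 65536 bytes, and this run writes 69632-byte units. A minimal sketch of the check, with both constants taken from the log line itself rather than from bdevperf's source:

# Why the -o 69632 run falls back to the copy path
io_size = 69632                    # -o argument, bytes (17 x 4096-byte blocks = 68 KiB)
zcopy_threshold = 65536            # 64 KiB, per the notice above
print(io_size > zcopy_threshold)   # True -> "Zero copy mechanism will not be used."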
00:18:19.719 727.00 IOPS, 48.28 MiB/s [2024-11-29T09:35:48.388Z] 736.00 IOPS, 48.88 MiB/s [2024-11-29T09:35:49.329Z] 743.00 IOPS, 49.34 MiB/s [2024-11-29T09:35:49.329Z] 749.50 IOPS, 49.77 MiB/s 00:18:21.603 Latency(us) 00:18:21.603 [2024-11-29T09:35:49.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.603 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:21.603 ftl0 : 4.00 749.51 49.77 0.00 0.00 1408.31 460.01 2520.62 00:18:21.603 [2024-11-29T09:35:49.329Z] =================================================================================================================== 00:18:21.603 [2024-11-29T09:35:49.329Z] Total : 749.51 49.77 0.00 0.00 1408.31 460.01 2520.62 00:18:21.603 { 00:18:21.603 "results": [ 00:18:21.603 { 00:18:21.603 "job": "ftl0", 00:18:21.603 "core_mask": "0x1", 00:18:21.603 "workload": "randwrite", 00:18:21.603 "status": "finished", 00:18:21.603 "queue_depth": 1, 00:18:21.603 "io_size": 69632, 00:18:21.603 "runtime": 4.001289, 00:18:21.603 "iops": 749.508470895254, 00:18:21.603 "mibps": 49.772046895387966, 00:18:21.603 "io_failed": 0, 00:18:21.603 "io_timeout": 0, 00:18:21.603 "avg_latency_us": 1408.3087470182368, 00:18:21.603 "min_latency_us": 460.0123076923077, 00:18:21.603 "max_latency_us": 2520.6153846153848 00:18:21.603 } 00:18:21.603 ], 00:18:21.603 "core_count": 1 00:18:21.603 } 00:18:21.603 [2024-11-29 09:35:49.202239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:21.603 09:35:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:21.603 [2024-11-29 09:35:49.302804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:21.603 Running I/O for 4 seconds... 
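The reported MiB/s figure is derived from IOPS and the I/O unit size, so the first run's results JSON above can be cross-checked directly; "iops", "io_size" and "mibps" below are field names from that JSON:

# MiB/s should equal IOPS * io_size / 2**20
iops = 749.508470895254                    # "iops" from the results JSON above
io_size = 69632                            # "io_size", bytes
print(round(iops * io_size / 2**20, 3))    # ~49.772, matching the reported "mibps"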
00:18:23.936 6106.00 IOPS, 23.85 MiB/s [2024-11-29T09:35:52.612Z] 5176.50 IOPS, 20.22 MiB/s [2024-11-29T09:35:53.556Z] 4919.33 IOPS, 19.22 MiB/s [2024-11-29T09:35:53.556Z] 4882.50 IOPS, 19.07 MiB/s 00:18:25.831 Latency(us) 00:18:25.831 [2024-11-29T09:35:53.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.831 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.831 ftl0 : 4.03 4877.43 19.05 0.00 0.00 26149.37 551.38 50613.96 00:18:25.831 [2024-11-29T09:35:53.557Z] =================================================================================================================== 00:18:25.831 [2024-11-29T09:35:53.557Z] Total : 4877.43 19.05 0.00 0.00 26149.37 0.00 50613.96 00:18:25.831 { 00:18:25.831 "results": [ 00:18:25.831 { 00:18:25.831 "job": "ftl0", 00:18:25.831 "core_mask": "0x1", 00:18:25.831 "workload": "randwrite", 00:18:25.831 "status": "finished", 00:18:25.831 "queue_depth": 128, 00:18:25.831 "io_size": 4096, 00:18:25.831 "runtime": 4.0304, 00:18:25.831 "iops": 4877.431520444621, 00:18:25.831 "mibps": 19.0524668767368, 00:18:25.831 "io_failed": 0, 00:18:25.831 "io_timeout": 0, 00:18:25.831 "avg_latency_us": 26149.366860076538, 00:18:25.831 "min_latency_us": 551.3846153846154, 00:18:25.831 "max_latency_us": 50613.95692307693 00:18:25.831 } 00:18:25.831 ], 00:18:25.831 "core_count": 1 00:18:25.831 } 00:18:25.831 [2024-11-29 09:35:53.340033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:25.831 09:35:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:25.831 [2024-11-29 09:35:53.442615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:25.831 Running I/O for 4 seconds... 
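With 128 I/Os in flight, the average latency of the 4 KiB randwrite run above can be sanity-checked with Little's Law (in-flight I/Os ≈ IOPS × mean latency). A rough check against that run's JSON; a small gap versus the reported avg_latency_us is normal here, since the measured runtime also covers queue ramp-up and drain:

# Little's Law: expected mean latency ~= queue_depth / IOPS
qd = 128                           # "queue_depth" from the results JSON above
iops = 4877.431520444621           # "iops" from the results JSON above
print(round(qd / iops * 1e6))      # ~26243 us vs 26149.37 us reported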
00:18:28.159 4797.00 IOPS, 18.74 MiB/s [2024-11-29T09:35:56.457Z] 4852.50 IOPS, 18.96 MiB/s [2024-11-29T09:35:57.467Z] 5043.33 IOPS, 19.70 MiB/s [2024-11-29T09:35:57.467Z] 5093.00 IOPS, 19.89 MiB/s 00:18:29.741 Latency(us) 00:18:29.741 [2024-11-29T09:35:57.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.741 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:29.741 Verification LBA range: start 0x0 length 0x1400000 00:18:29.741 ftl0 : 4.01 5106.79 19.95 0.00 0.00 24991.25 296.17 90338.86 00:18:29.741 [2024-11-29T09:35:57.467Z] =================================================================================================================== 00:18:29.741 [2024-11-29T09:35:57.467Z] Total : 5106.79 19.95 0.00 0.00 24991.25 0.00 90338.86 00:18:30.003 [2024-11-29 09:35:57.464262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:30.003 { 00:18:30.003 "results": [ 00:18:30.003 { 00:18:30.003 "job": "ftl0", 00:18:30.003 "core_mask": "0x1", 00:18:30.003 "workload": "verify", 00:18:30.003 "status": "finished", 00:18:30.003 "verify_range": { 00:18:30.003 "start": 0, 00:18:30.003 "length": 20971520 00:18:30.003 }, 00:18:30.003 "queue_depth": 128, 00:18:30.003 "io_size": 4096, 00:18:30.003 "runtime": 4.014261, 00:18:30.003 "iops": 5106.793006234522, 00:18:30.003 "mibps": 19.948410180603602, 00:18:30.003 "io_failed": 0, 00:18:30.003 "io_timeout": 0, 00:18:30.003 "avg_latency_us": 24991.251673095685, 00:18:30.003 "min_latency_us": 296.1723076923077, 00:18:30.003 "max_latency_us": 90338.85538461538 00:18:30.003 } 00:18:30.003 ], 00:18:30.003 "core_count": 1 00:18:30.003 } 00:18:30.003 09:35:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:30.003 [2024-11-29 09:35:57.666189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.003 [2024-11-29 09:35:57.666373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:30.003 [2024-11-29 09:35:57.666393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:30.003 [2024-11-29 09:35:57.666407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.003 [2024-11-29 09:35:57.666433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:30.003 [2024-11-29 09:35:57.666911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.003 [2024-11-29 09:35:57.666928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:30.003 [2024-11-29 09:35:57.666939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:18:30.003 [2024-11-29 09:35:57.666947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.003 [2024-11-29 09:35:57.669525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.003 [2024-11-29 09:35:57.669594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:30.003 [2024-11-29 09:35:57.669616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:18:30.003 [2024-11-29 09:35:57.669624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.867358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.867407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:18:30.266 [2024-11-29 09:35:57.867422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 197.712 ms 00:18:30.266 [2024-11-29 09:35:57.867430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.873701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.873726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:30.266 [2024-11-29 09:35:57.873738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms 00:18:30.266 [2024-11-29 09:35:57.873746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.876070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.876101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:30.266 [2024-11-29 09:35:57.876113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:18:30.266 [2024-11-29 09:35:57.876120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.880809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.880843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:30.266 [2024-11-29 09:35:57.880857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms 00:18:30.266 [2024-11-29 09:35:57.880864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.880971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.880981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:30.266 [2024-11-29 09:35:57.880991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:30.266 [2024-11-29 09:35:57.880998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.883522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.883664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:30.266 [2024-11-29 09:35:57.883682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.501 ms 00:18:30.266 [2024-11-29 09:35:57.883689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.886056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.886087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:30.266 [2024-11-29 09:35:57.886098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.115 ms 00:18:30.266 [2024-11-29 09:35:57.886105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.887898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.887929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:30.266 [2024-11-29 09:35:57.887942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:18:30.266 [2024-11-29 09:35:57.887949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.889615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.266 [2024-11-29 09:35:57.889644] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:30.266 [2024-11-29 09:35:57.889654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:18:30.266 [2024-11-29 09:35:57.889661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.266 [2024-11-29 09:35:57.889690] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:30.266 [2024-11-29 09:35:57.889704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:30.266 [2024-11-29 09:35:57.889889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.889992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.890001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.890008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:30.266 [2024-11-29 09:35:57.890017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890517] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:30.267 [2024-11-29 09:35:57.890557] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:30.267 [2024-11-29 09:35:57.890566] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63720b83-66d4-49fe-802b-d487fea7e144 00:18:30.267 [2024-11-29 09:35:57.890574] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:30.267 [2024-11-29 09:35:57.890582] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:30.267 [2024-11-29 09:35:57.890606] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:30.267 [2024-11-29 09:35:57.890617] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:30.267 [2024-11-29 09:35:57.890624] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:30.267 [2024-11-29 09:35:57.890633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:30.267 [2024-11-29 09:35:57.890642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:30.267 [2024-11-29 09:35:57.890652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:30.267 [2024-11-29 09:35:57.890658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:30.267 [2024-11-29 09:35:57.890677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.267 [2024-11-29 09:35:57.890685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:30.267 [2024-11-29 09:35:57.890704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:18:30.267 [2024-11-29 09:35:57.890712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.267 [2024-11-29 09:35:57.892228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.267 [2024-11-29 09:35:57.892248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:30.267 [2024-11-29 09:35:57.892258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:18:30.267 [2024-11-29 09:35:57.892265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.267 [2024-11-29 09:35:57.892362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.267 [2024-11-29 09:35:57.892371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:30.267 [2024-11-29 09:35:57.892383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:30.267 [2024-11-29 09:35:57.892390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.267 [2024-11-29 09:35:57.897740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.267 [2024-11-29 09:35:57.897846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:30.267 [2024-11-29 09:35:57.897898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.267 [2024-11-29 09:35:57.897921] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:30.267 [2024-11-29 09:35:57.897990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.267 [2024-11-29 09:35:57.898016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:30.267 [2024-11-29 09:35:57.898036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.267 [2024-11-29 09:35:57.898054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.267 [2024-11-29 09:35:57.898129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.267 [2024-11-29 09:35:57.898221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:30.267 [2024-11-29 09:35:57.898247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.267 [2024-11-29 09:35:57.898267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.898297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.898320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:30.268 [2024-11-29 09:35:57.898342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.898361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.907759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.907893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:30.268 [2024-11-29 09:35:57.907946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.907968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.916313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.916430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.268 [2024-11-29 09:35:57.916480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.916502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.916582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.916620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.268 [2024-11-29 09:35:57.916642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.916661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.916706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.916729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.268 [2024-11-29 09:35:57.916806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.916833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.916917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.916945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.268 [2024-11-29 09:35:57.917016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:30.268 [2024-11-29 09:35:57.917038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.917104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.917128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:30.268 [2024-11-29 09:35:57.917149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.917206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.917291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.917325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.268 [2024-11-29 09:35:57.917370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.917712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.917778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.268 [2024-11-29 09:35:57.918221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.268 [2024-11-29 09:35:57.918278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.268 [2024-11-29 09:35:57.918632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.268 [2024-11-29 09:35:57.918854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 252.610 ms, result 0 00:18:30.268 true 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88809 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88809 ']' 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88809 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88809 00:18:30.268 killing process with pid 88809 00:18:30.268 Received shutdown signal, test time was about 4.000000 seconds 00:18:30.268 00:18:30.268 Latency(us) 00:18:30.268 [2024-11-29T09:35:57.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.268 [2024-11-29T09:35:57.994Z] =================================================================================================================== 00:18:30.268 [2024-11-29T09:35:57.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88809' 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88809 00:18:30.268 09:35:57 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88809 00:18:30.530 Remove shared memory files 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:30.530 09:35:58 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:30.530 00:18:30.530 real 0m21.261s 00:18:30.530 user 0m23.918s 00:18:30.530 sys 0m0.974s 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.530 ************************************ 00:18:30.530 09:35:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:30.530 END TEST ftl_bdevperf 00:18:30.530 ************************************ 00:18:30.792 09:35:58 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:30.792 09:35:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:30.792 09:35:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.792 09:35:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:30.792 ************************************ 00:18:30.792 START TEST ftl_trim 00:18:30.792 ************************************ 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:30.792 * Looking for test storage... 00:18:30.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.792 09:35:58 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.792 --rc genhtml_branch_coverage=1 00:18:30.792 --rc genhtml_function_coverage=1 00:18:30.792 --rc genhtml_legend=1 00:18:30.792 --rc geninfo_all_blocks=1 00:18:30.792 --rc geninfo_unexecuted_blocks=1 00:18:30.792 00:18:30.792 ' 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.792 --rc genhtml_branch_coverage=1 00:18:30.792 --rc genhtml_function_coverage=1 00:18:30.792 --rc genhtml_legend=1 00:18:30.792 --rc geninfo_all_blocks=1 00:18:30.792 --rc geninfo_unexecuted_blocks=1 00:18:30.792 00:18:30.792 ' 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.792 --rc genhtml_branch_coverage=1 00:18:30.792 --rc genhtml_function_coverage=1 00:18:30.792 --rc genhtml_legend=1 00:18:30.792 --rc geninfo_all_blocks=1 00:18:30.792 --rc geninfo_unexecuted_blocks=1 00:18:30.792 00:18:30.792 ' 00:18:30.792 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:30.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.792 --rc genhtml_branch_coverage=1 00:18:30.792 --rc genhtml_function_coverage=1 00:18:30.792 --rc genhtml_legend=1 00:18:30.792 --rc geninfo_all_blocks=1 00:18:30.792 --rc geninfo_unexecuted_blocks=1 00:18:30.792 00:18:30.792 ' 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
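The lcov probe a few lines above walks through the cmp_versions helper in scripts/common.sh: both version strings are split on the IFS class ".-:" and compared component by component, which is why 'lt 1.15 2' returns true at the first component (1 < 2). A Python sketch of that comparison as traced here — treating a missing component as 0 is an assumption, since the trace returns before that case is exercised:

# Component-wise version compare mirroring the traced shell logic
import re
from itertools import zip_longest

def version_lt(a, b):
    pa = [int(x) for x in re.split(r"[.:\-]", a) if x.isdigit()]
    pb = [int(x) for x in re.split(r"[.:\-]", b) if x.isdigit()]
    for x, y in zip_longest(pa, pb, fillvalue=0):  # assumed: absent parts count as 0
        if x != y:
            return x < y
    return False

print(version_lt("1.15", "2"))  # True, matching the '# return 0' in the trace above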
00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:30.792 09:35:58 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:30.793 09:35:58 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89152 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89152 00:18:30.793 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89152 ']' 00:18:30.793 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.793 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.793 09:35:58 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:30.793 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.793 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.793 09:35:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:31.054 [2024-11-29 09:35:58.560552] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:31.055 [2024-11-29 09:35:58.560978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89152 ] 00:18:31.055 [2024-11-29 09:35:58.699511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:31.055 [2024-11-29 09:35:58.725893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.055 [2024-11-29 09:35:58.757282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.055 [2024-11-29 09:35:58.757677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.055 [2024-11-29 09:35:58.757741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.003 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.003 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:32.003 09:35:59 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:32.003 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:32.003 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:32.003 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:32.003 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:32.003 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:32.264 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:32.264 { 00:18:32.264 "name": "nvme0n1", 00:18:32.264 "aliases": [ 00:18:32.264 "6b29a973-c435-41a7-b23e-cd0aad20d208" 00:18:32.264 ], 00:18:32.264 "product_name": "NVMe disk", 00:18:32.264 "block_size": 4096, 00:18:32.264 "num_blocks": 1310720, 00:18:32.264 "uuid": "6b29a973-c435-41a7-b23e-cd0aad20d208", 00:18:32.264 "numa_id": -1, 00:18:32.264 "assigned_rate_limits": { 00:18:32.264 "rw_ios_per_sec": 0, 00:18:32.264 "rw_mbytes_per_sec": 0, 00:18:32.264 "r_mbytes_per_sec": 0, 00:18:32.264 "w_mbytes_per_sec": 0 00:18:32.264 }, 00:18:32.264 "claimed": true, 00:18:32.264 "claim_type": "read_many_write_one", 00:18:32.264 "zoned": false, 00:18:32.264 "supported_io_types": { 00:18:32.264 "read": true, 00:18:32.264 "write": true, 00:18:32.264 "unmap": true, 00:18:32.264 "flush": true, 00:18:32.264 "reset": true, 00:18:32.264 "nvme_admin": true, 00:18:32.264 "nvme_io": true, 00:18:32.264 "nvme_io_md": false, 00:18:32.264 "write_zeroes": true, 00:18:32.264 "zcopy": false, 00:18:32.264 "get_zone_info": false, 00:18:32.264 "zone_management": false, 00:18:32.264 "zone_append": false, 00:18:32.264 "compare": true, 00:18:32.264 "compare_and_write": false, 00:18:32.264 "abort": true, 00:18:32.264 "seek_hole": false, 00:18:32.264 "seek_data": false, 00:18:32.264 "copy": true, 00:18:32.264 "nvme_iov_md": false 00:18:32.264 }, 00:18:32.264 "driver_specific": { 00:18:32.264 "nvme": [ 00:18:32.264 { 00:18:32.264 "pci_address": "0000:00:11.0", 00:18:32.264 "trid": { 00:18:32.264 "trtype": "PCIe", 00:18:32.264 "traddr": "0000:00:11.0" 00:18:32.264 }, 00:18:32.264 "ctrlr_data": { 00:18:32.264 "cntlid": 0, 00:18:32.264 "vendor_id": "0x1b36", 00:18:32.264 "model_number": "QEMU NVMe Ctrl", 00:18:32.264 "serial_number": "12341", 00:18:32.264 "firmware_revision": "8.0.0", 00:18:32.264 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:32.264 "oacs": { 00:18:32.264 "security": 0, 00:18:32.264 "format": 1, 00:18:32.264 "firmware": 0, 00:18:32.264 "ns_manage": 1 00:18:32.264 }, 00:18:32.264 "multi_ctrlr": false, 00:18:32.264 "ana_reporting": false 00:18:32.264 }, 00:18:32.264 "vs": { 00:18:32.264 "nvme_version": "1.4" 00:18:32.264 }, 00:18:32.264 "ns_data": { 00:18:32.264 "id": 1, 00:18:32.264 "can_share": false 00:18:32.264 } 00:18:32.264 } 00:18:32.264 ], 00:18:32.264 "mp_policy": "active_passive" 00:18:32.264 } 00:18:32.264 } 00:18:32.264 ]' 00:18:32.264 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:32.264 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:32.264 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:32.264 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:32.264 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:32.264 09:35:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:32.264 09:35:59 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:32.264 09:35:59 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:32.264 09:35:59 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:32.264 09:35:59 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:32.264 09:35:59 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:32.526 09:36:00 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=e4ef6250-9a81-417f-8fd9-4a64a879961a 00:18:32.526 09:36:00 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:32.526 09:36:00 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4ef6250-9a81-417f-8fd9-4a64a879961a 00:18:32.788 09:36:00 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:33.049 09:36:00 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=11c6ec6a-00d2-4cb7-879f-0fd44f724fae 00:18:33.049 09:36:00 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 11c6ec6a-00d2-4cb7-879f-0fd44f724fae 00:18:33.311 09:36:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.311 09:36:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.311 09:36:00 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:33.311 09:36:00 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:33.311 09:36:00 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.311 09:36:00 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:33.311 09:36:00 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.311 09:36:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.311 09:36:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:33.311 09:36:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:33.311 09:36:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:33.311 09:36:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.572 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:33.572 { 00:18:33.572 "name": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:33.572 "aliases": [ 00:18:33.572 "lvs/nvme0n1p0" 00:18:33.572 ], 00:18:33.572 "product_name": "Logical Volume", 00:18:33.572 "block_size": 4096, 00:18:33.572 "num_blocks": 26476544, 00:18:33.572 "uuid": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:33.572 "assigned_rate_limits": { 00:18:33.572 "rw_ios_per_sec": 0, 00:18:33.572 "rw_mbytes_per_sec": 0, 00:18:33.572 "r_mbytes_per_sec": 0, 00:18:33.572 "w_mbytes_per_sec": 0 00:18:33.572 }, 00:18:33.572 "claimed": false, 00:18:33.572 "zoned": false, 00:18:33.572 "supported_io_types": { 00:18:33.572 "read": true, 00:18:33.572 "write": true, 00:18:33.572 "unmap": true, 00:18:33.572 "flush": false, 00:18:33.572 "reset": true, 00:18:33.572 "nvme_admin": false, 00:18:33.572 "nvme_io": false, 00:18:33.573 "nvme_io_md": false, 00:18:33.573 "write_zeroes": true, 00:18:33.573 "zcopy": false, 00:18:33.573 "get_zone_info": false, 00:18:33.573 "zone_management": false, 00:18:33.573 "zone_append": false, 00:18:33.573 "compare": false, 00:18:33.573 "compare_and_write": false, 00:18:33.573 "abort": false, 00:18:33.573 "seek_hole": true, 00:18:33.573 "seek_data": true, 00:18:33.573 "copy": false, 00:18:33.573 "nvme_iov_md": false 00:18:33.573 }, 00:18:33.573 "driver_specific": { 00:18:33.573 "lvol": { 00:18:33.573 "lvol_store_uuid": "11c6ec6a-00d2-4cb7-879f-0fd44f724fae", 00:18:33.573 "base_bdev": "nvme0n1", 00:18:33.573 "thin_provision": true, 
00:18:33.573 "num_allocated_clusters": 0, 00:18:33.573 "snapshot": false, 00:18:33.573 "clone": false, 00:18:33.573 "esnap_clone": false 00:18:33.573 } 00:18:33.573 } 00:18:33.573 } 00:18:33.573 ]' 00:18:33.573 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:33.573 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:33.573 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:33.573 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:33.573 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:33.573 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:33.573 09:36:01 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:33.573 09:36:01 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:33.573 09:36:01 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:33.834 09:36:01 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:33.834 09:36:01 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:33.834 09:36:01 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.834 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:33.834 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:33.834 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:33.834 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:33.834 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:34.096 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:34.096 { 00:18:34.096 "name": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:34.096 "aliases": [ 00:18:34.096 "lvs/nvme0n1p0" 00:18:34.096 ], 00:18:34.096 "product_name": "Logical Volume", 00:18:34.096 "block_size": 4096, 00:18:34.096 "num_blocks": 26476544, 00:18:34.096 "uuid": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:34.096 "assigned_rate_limits": { 00:18:34.096 "rw_ios_per_sec": 0, 00:18:34.096 "rw_mbytes_per_sec": 0, 00:18:34.096 "r_mbytes_per_sec": 0, 00:18:34.096 "w_mbytes_per_sec": 0 00:18:34.096 }, 00:18:34.096 "claimed": false, 00:18:34.096 "zoned": false, 00:18:34.096 "supported_io_types": { 00:18:34.096 "read": true, 00:18:34.096 "write": true, 00:18:34.096 "unmap": true, 00:18:34.096 "flush": false, 00:18:34.096 "reset": true, 00:18:34.096 "nvme_admin": false, 00:18:34.096 "nvme_io": false, 00:18:34.096 "nvme_io_md": false, 00:18:34.096 "write_zeroes": true, 00:18:34.096 "zcopy": false, 00:18:34.096 "get_zone_info": false, 00:18:34.096 "zone_management": false, 00:18:34.096 "zone_append": false, 00:18:34.096 "compare": false, 00:18:34.096 "compare_and_write": false, 00:18:34.096 "abort": false, 00:18:34.096 "seek_hole": true, 00:18:34.096 "seek_data": true, 00:18:34.096 "copy": false, 00:18:34.096 "nvme_iov_md": false 00:18:34.096 }, 00:18:34.096 "driver_specific": { 00:18:34.096 "lvol": { 00:18:34.096 "lvol_store_uuid": "11c6ec6a-00d2-4cb7-879f-0fd44f724fae", 00:18:34.096 "base_bdev": "nvme0n1", 00:18:34.096 "thin_provision": true, 00:18:34.096 "num_allocated_clusters": 0, 00:18:34.096 "snapshot": false, 00:18:34.096 "clone": false, 00:18:34.096 
"esnap_clone": false 00:18:34.096 } 00:18:34.096 } 00:18:34.096 } 00:18:34.096 ]' 00:18:34.096 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:34.096 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:34.096 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:34.096 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:34.096 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:34.096 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:34.096 09:36:01 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:34.096 09:36:01 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:34.355 09:36:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:34.355 09:36:01 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:34.355 09:36:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:34.355 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:34.355 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:34.355 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:34.355 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:34.355 09:36:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f2521e6-c43c-40a5-9853-0631c1736b03 00:18:34.355 09:36:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:34.355 { 00:18:34.355 "name": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:34.355 "aliases": [ 00:18:34.355 "lvs/nvme0n1p0" 00:18:34.355 ], 00:18:34.356 "product_name": "Logical Volume", 00:18:34.356 "block_size": 4096, 00:18:34.356 "num_blocks": 26476544, 00:18:34.356 "uuid": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:34.356 "assigned_rate_limits": { 00:18:34.356 "rw_ios_per_sec": 0, 00:18:34.356 "rw_mbytes_per_sec": 0, 00:18:34.356 "r_mbytes_per_sec": 0, 00:18:34.356 "w_mbytes_per_sec": 0 00:18:34.356 }, 00:18:34.356 "claimed": false, 00:18:34.356 "zoned": false, 00:18:34.356 "supported_io_types": { 00:18:34.356 "read": true, 00:18:34.356 "write": true, 00:18:34.356 "unmap": true, 00:18:34.356 "flush": false, 00:18:34.356 "reset": true, 00:18:34.356 "nvme_admin": false, 00:18:34.356 "nvme_io": false, 00:18:34.356 "nvme_io_md": false, 00:18:34.356 "write_zeroes": true, 00:18:34.356 "zcopy": false, 00:18:34.356 "get_zone_info": false, 00:18:34.356 "zone_management": false, 00:18:34.356 "zone_append": false, 00:18:34.356 "compare": false, 00:18:34.356 "compare_and_write": false, 00:18:34.356 "abort": false, 00:18:34.356 "seek_hole": true, 00:18:34.356 "seek_data": true, 00:18:34.356 "copy": false, 00:18:34.356 "nvme_iov_md": false 00:18:34.356 }, 00:18:34.356 "driver_specific": { 00:18:34.356 "lvol": { 00:18:34.356 "lvol_store_uuid": "11c6ec6a-00d2-4cb7-879f-0fd44f724fae", 00:18:34.356 "base_bdev": "nvme0n1", 00:18:34.356 "thin_provision": true, 00:18:34.356 "num_allocated_clusters": 0, 00:18:34.356 "snapshot": false, 00:18:34.356 "clone": false, 00:18:34.356 "esnap_clone": false 00:18:34.356 } 00:18:34.356 } 00:18:34.356 } 00:18:34.356 ]' 00:18:34.356 09:36:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:34.615 09:36:02 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:18:34.615 09:36:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:34.615 09:36:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:34.615 09:36:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:34.615 09:36:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:34.615 09:36:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:34.615 09:36:02 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9f2521e6-c43c-40a5-9853-0631c1736b03 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:34.615 [2024-11-29 09:36:02.295492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.615 [2024-11-29 09:36:02.295535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:34.615 [2024-11-29 09:36:02.295552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:34.615 [2024-11-29 09:36:02.295560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.615 [2024-11-29 09:36:02.297941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.615 [2024-11-29 09:36:02.298062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.615 [2024-11-29 09:36:02.298082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.354 ms 00:18:34.615 [2024-11-29 09:36:02.298102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.615 [2024-11-29 09:36:02.298438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:34.615 [2024-11-29 09:36:02.298730] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:34.615 [2024-11-29 09:36:02.298757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.615 [2024-11-29 09:36:02.298768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.615 [2024-11-29 09:36:02.298779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:18:34.615 [2024-11-29 09:36:02.298787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.615 [2024-11-29 09:36:02.298886] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f68941d2-65a9-48fc-bf3a-b4b6b1215387 00:18:34.615 [2024-11-29 09:36:02.299961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.615 [2024-11-29 09:36:02.299993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:34.615 [2024-11-29 09:36:02.300004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:34.615 [2024-11-29 09:36:02.300015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.305336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.616 [2024-11-29 09:36:02.305460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.616 [2024-11-29 09:36:02.305475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.240 ms 00:18:34.616 [2024-11-29 09:36:02.305503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.305650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:34.616 [2024-11-29 09:36:02.305667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.616 [2024-11-29 09:36:02.305678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:34.616 [2024-11-29 09:36:02.305689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.305725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.616 [2024-11-29 09:36:02.305735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:34.616 [2024-11-29 09:36:02.305743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:34.616 [2024-11-29 09:36:02.305752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.305790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:34.616 [2024-11-29 09:36:02.307211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.616 [2024-11-29 09:36:02.307239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.616 [2024-11-29 09:36:02.307250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:18:34.616 [2024-11-29 09:36:02.307258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.307299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.616 [2024-11-29 09:36:02.307308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:34.616 [2024-11-29 09:36:02.307321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:34.616 [2024-11-29 09:36:02.307329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.307368] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:34.616 [2024-11-29 09:36:02.307500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:34.616 [2024-11-29 09:36:02.307512] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:34.616 [2024-11-29 09:36:02.307522] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:34.616 [2024-11-29 09:36:02.307534] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:34.616 [2024-11-29 09:36:02.307543] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:34.616 [2024-11-29 09:36:02.307553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:34.616 [2024-11-29 09:36:02.307560] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:34.616 [2024-11-29 09:36:02.307570] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:34.616 [2024-11-29 09:36:02.307576] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:34.616 [2024-11-29 09:36:02.307598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.616 [2024-11-29 09:36:02.307605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:34.616 [2024-11-29 09:36:02.307615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 
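The MiB figures echoed in this trace (5120 for the raw namespace, 103424 for the lvol) come from one repeated helper pattern: dump the bdev with bdev_get_bdevs, extract block_size and num_blocks with jq, multiply, and convert to MiB. A minimal stand-alone sketch of that computation, assuming the SPDK target from this run is still serving RPCs (the RPC shell variable is shorthand introduced here, not part of the test scripts):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$($RPC bdev_get_bdevs -b 9f2521e6-c43c-40a5-9853-0631c1736b03)
  bs=$(jq '.[] .block_size' <<<"$info")    # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<<"$info")    # 26476544 in this run
  echo $(( bs * nb / 1024 / 1024 ))        # 103424 MiB, the bdev_size computed above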
00:18:34.616 [2024-11-29 09:36:02.307622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.307722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.616 [2024-11-29 09:36:02.307729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:34.616 [2024-11-29 09:36:02.307739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:34.616 [2024-11-29 09:36:02.307745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.616 [2024-11-29 09:36:02.307872] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:34.616 [2024-11-29 09:36:02.307881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:34.616 [2024-11-29 09:36:02.307902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.616 [2024-11-29 09:36:02.307909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.616 [2024-11-29 09:36:02.307919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:34.616 [2024-11-29 09:36:02.307925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:34.616 [2024-11-29 09:36:02.307934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:34.616 [2024-11-29 09:36:02.307941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:34.616 [2024-11-29 09:36:02.307949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:34.616 [2024-11-29 09:36:02.307955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.616 [2024-11-29 09:36:02.307964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:34.616 [2024-11-29 09:36:02.307971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:34.616 [2024-11-29 09:36:02.307981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.616 [2024-11-29 09:36:02.307988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:34.616 [2024-11-29 09:36:02.307998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:34.616 [2024-11-29 09:36:02.308004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:34.616 [2024-11-29 09:36:02.308019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:34.616 [2024-11-29 09:36:02.308027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:34.616 [2024-11-29 09:36:02.308041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.616 [2024-11-29 09:36:02.308055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:34.616 [2024-11-29 09:36:02.308062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.616 [2024-11-29 09:36:02.308076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:34.616 [2024-11-29 09:36:02.308085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.616 [2024-11-29 09:36:02.308102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:34.616 [2024-11-29 09:36:02.308108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.616 [2024-11-29 09:36:02.308122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:34.616 [2024-11-29 09:36:02.308130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.616 [2024-11-29 09:36:02.308145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:34.616 [2024-11-29 09:36:02.308151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:34.616 [2024-11-29 09:36:02.308159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.616 [2024-11-29 09:36:02.308166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:34.616 [2024-11-29 09:36:02.308174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:34.616 [2024-11-29 09:36:02.308180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:34.616 [2024-11-29 09:36:02.308194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:34.616 [2024-11-29 09:36:02.308203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308210] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:34.616 [2024-11-29 09:36:02.308220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:34.616 [2024-11-29 09:36:02.308227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.616 [2024-11-29 09:36:02.308236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.616 [2024-11-29 09:36:02.308243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:34.616 [2024-11-29 09:36:02.308251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:34.616 [2024-11-29 09:36:02.308258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:34.616 [2024-11-29 09:36:02.308266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:34.616 [2024-11-29 09:36:02.308272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:34.616 [2024-11-29 09:36:02.308280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:34.616 [2024-11-29 09:36:02.308290] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:34.616 [2024-11-29 09:36:02.308301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.616 [2024-11-29 09:36:02.308311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:34.616 [2024-11-29 09:36:02.308319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:34.616 [2024-11-29 09:36:02.308326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:34.616 [2024-11-29 09:36:02.308336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:34.616 [2024-11-29 09:36:02.308343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:34.616 [2024-11-29 09:36:02.308353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:34.616 [2024-11-29 09:36:02.308360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:34.616 [2024-11-29 09:36:02.308368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:34.616 [2024-11-29 09:36:02.308375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:34.617 [2024-11-29 09:36:02.308385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:34.617 [2024-11-29 09:36:02.308392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:34.617 [2024-11-29 09:36:02.308400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:34.617 [2024-11-29 09:36:02.308407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:34.617 [2024-11-29 09:36:02.308416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:34.617 [2024-11-29 09:36:02.308423] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:34.617 [2024-11-29 09:36:02.308432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.617 [2024-11-29 09:36:02.308440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:34.617 [2024-11-29 09:36:02.308449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:34.617 [2024-11-29 09:36:02.308456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:34.617 [2024-11-29 09:36:02.308465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:34.617 [2024-11-29 09:36:02.308472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.617 [2024-11-29 09:36:02.308483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:34.617 [2024-11-29 09:36:02.308490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:18:34.617 
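The superblock dump and the layout dump above are two views of the same geometry: region sizes are given in 4096-byte FTL blocks in the hex view and in MiB in the readable view, so size_MiB = blk_sz * 4096 / 2^20. A quick check against the L2P region from this run (type 0x2, blk_sz 0x5a00), plus the matching entry-count arithmetic:

  printf '%d MiB\n' $(( 0x5a00 * 4096 / 1048576 ))   # 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"
  printf '%d MiB\n' $(( 23592960 * 4 / 1048576 ))    # 23592960 L2P entries x 4-byte addresses = the same 90 MiB

The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that 90 MiB table may stay resident, which is why the startup later reports "l2p maximum resident size is: 59 (of 60) MiB".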
[2024-11-29 09:36:02.308504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.617 [2024-11-29 09:36:02.308846] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:34.617 [2024-11-29 09:36:02.308901] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:37.150 [2024-11-29 09:36:04.630452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.630684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:37.150 [2024-11-29 09:36:04.630771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2321.595 ms 00:18:37.150 [2024-11-29 09:36:04.630799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.639443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.639612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.150 [2024-11-29 09:36:04.639686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.542 ms 00:18:37.150 [2024-11-29 09:36:04.639714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.639907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.639994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:37.150 [2024-11-29 09:36:04.640052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:37.150 [2024-11-29 09:36:04.640077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.658244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.658459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.150 [2024-11-29 09:36:04.658555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.118 ms 00:18:37.150 [2024-11-29 09:36:04.658709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.658860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.658918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.150 [2024-11-29 09:36:04.659031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:37.150 [2024-11-29 09:36:04.659073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.659495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.659657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.150 [2024-11-29 09:36:04.659678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:18:37.150 [2024-11-29 09:36:04.659710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.659897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.659917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.150 [2024-11-29 09:36:04.659929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:18:37.150 [2024-11-29 09:36:04.659943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.666669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.666778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.150 [2024-11-29 09:36:04.666792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:18:37.150 [2024-11-29 09:36:04.666802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.675090] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:37.150 [2024-11-29 09:36:04.689690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.689800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:37.150 [2024-11-29 09:36:04.689818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.798 ms 00:18:37.150 [2024-11-29 09:36:04.689826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.743799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.743841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:37.150 [2024-11-29 09:36:04.743857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.902 ms 00:18:37.150 [2024-11-29 09:36:04.743869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.744057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.744068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:37.150 [2024-11-29 09:36:04.744078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:37.150 [2024-11-29 09:36:04.744085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.747412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.747443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:37.150 [2024-11-29 09:36:04.747455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms 00:18:37.150 [2024-11-29 09:36:04.747465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.750121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.750151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:37.150 [2024-11-29 09:36:04.750162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:18:37.150 [2024-11-29 09:36:04.750169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.750487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.750503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:37.150 [2024-11-29 09:36:04.750515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:37.150 [2024-11-29 09:36:04.750522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.776882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.777004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:18:37.150 [2024-11-29 09:36:04.777027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.312 ms 00:18:37.150 [2024-11-29 09:36:04.777037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.781057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.781098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:37.150 [2024-11-29 09:36:04.781111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:18:37.150 [2024-11-29 09:36:04.781130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.784063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.784173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:37.150 [2024-11-29 09:36:04.784190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms 00:18:37.150 [2024-11-29 09:36:04.784197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.787539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.787681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:37.150 [2024-11-29 09:36:04.787700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.286 ms 00:18:37.150 [2024-11-29 09:36:04.787707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.787755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.787764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:37.150 [2024-11-29 09:36:04.787775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:37.150 [2024-11-29 09:36:04.787782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.787856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.150 [2024-11-29 09:36:04.787865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:37.150 [2024-11-29 09:36:04.787874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:37.150 [2024-11-29 09:36:04.787881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.150 [2024-11-29 09:36:04.788753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:37.150 [2024-11-29 09:36:04.789735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2492.955 ms, result 0 00:18:37.150 [2024-11-29 09:36:04.790322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:37.150 { 00:18:37.150 "name": "ftl0", 00:18:37.150 "uuid": "f68941d2-65a9-48fc-bf3a-b4b6b1215387" 00:18:37.150 } 00:18:37.150 09:36:04 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:37.150 09:36:04 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:37.150 09:36:04 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:37.150 09:36:04 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:18:37.150 09:36:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:37.150 09:36:04 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:37.150 09:36:04 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:37.409 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:37.666 [ 00:18:37.666 { 00:18:37.666 "name": "ftl0", 00:18:37.666 "aliases": [ 00:18:37.666 "f68941d2-65a9-48fc-bf3a-b4b6b1215387" 00:18:37.666 ], 00:18:37.666 "product_name": "FTL disk", 00:18:37.666 "block_size": 4096, 00:18:37.666 "num_blocks": 23592960, 00:18:37.666 "uuid": "f68941d2-65a9-48fc-bf3a-b4b6b1215387", 00:18:37.666 "assigned_rate_limits": { 00:18:37.666 "rw_ios_per_sec": 0, 00:18:37.666 "rw_mbytes_per_sec": 0, 00:18:37.666 "r_mbytes_per_sec": 0, 00:18:37.666 "w_mbytes_per_sec": 0 00:18:37.666 }, 00:18:37.666 "claimed": false, 00:18:37.666 "zoned": false, 00:18:37.666 "supported_io_types": { 00:18:37.666 "read": true, 00:18:37.666 "write": true, 00:18:37.666 "unmap": true, 00:18:37.666 "flush": true, 00:18:37.666 "reset": false, 00:18:37.666 "nvme_admin": false, 00:18:37.666 "nvme_io": false, 00:18:37.666 "nvme_io_md": false, 00:18:37.666 "write_zeroes": true, 00:18:37.666 "zcopy": false, 00:18:37.666 "get_zone_info": false, 00:18:37.666 "zone_management": false, 00:18:37.666 "zone_append": false, 00:18:37.666 "compare": false, 00:18:37.666 "compare_and_write": false, 00:18:37.666 "abort": false, 00:18:37.666 "seek_hole": false, 00:18:37.666 "seek_data": false, 00:18:37.666 "copy": false, 00:18:37.666 "nvme_iov_md": false 00:18:37.666 }, 00:18:37.666 "driver_specific": { 00:18:37.666 "ftl": { 00:18:37.666 "base_bdev": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:37.666 "cache": "nvc0n1p0" 00:18:37.666 } 00:18:37.666 } 00:18:37.666 } 00:18:37.666 ] 00:18:37.666 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:18:37.666 09:36:05 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:37.666 09:36:05 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:37.924 09:36:05 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:37.924 09:36:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:37.924 09:36:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:37.924 { 00:18:37.924 "name": "ftl0", 00:18:37.924 "aliases": [ 00:18:37.924 "f68941d2-65a9-48fc-bf3a-b4b6b1215387" 00:18:37.924 ], 00:18:37.924 "product_name": "FTL disk", 00:18:37.924 "block_size": 4096, 00:18:37.924 "num_blocks": 23592960, 00:18:37.924 "uuid": "f68941d2-65a9-48fc-bf3a-b4b6b1215387", 00:18:37.924 "assigned_rate_limits": { 00:18:37.924 "rw_ios_per_sec": 0, 00:18:37.924 "rw_mbytes_per_sec": 0, 00:18:37.924 "r_mbytes_per_sec": 0, 00:18:37.924 "w_mbytes_per_sec": 0 00:18:37.924 }, 00:18:37.924 "claimed": false, 00:18:37.924 "zoned": false, 00:18:37.924 "supported_io_types": { 00:18:37.924 "read": true, 00:18:37.924 "write": true, 00:18:37.924 "unmap": true, 00:18:37.924 "flush": true, 00:18:37.924 "reset": false, 00:18:37.924 "nvme_admin": false, 00:18:37.924 "nvme_io": false, 00:18:37.924 "nvme_io_md": false, 00:18:37.924 "write_zeroes": true, 00:18:37.924 "zcopy": false, 00:18:37.924 "get_zone_info": false, 00:18:37.924 "zone_management": false, 00:18:37.924 "zone_append": false, 00:18:37.924 "compare": false, 00:18:37.924 "compare_and_write": false, 00:18:37.924 "abort": false, 00:18:37.924 "seek_hole": false, 
00:18:37.924 "seek_data": false, 00:18:37.924 "copy": false, 00:18:37.924 "nvme_iov_md": false 00:18:37.924 }, 00:18:37.924 "driver_specific": { 00:18:37.924 "ftl": { 00:18:37.924 "base_bdev": "9f2521e6-c43c-40a5-9853-0631c1736b03", 00:18:37.924 "cache": "nvc0n1p0" 00:18:37.924 } 00:18:37.924 } 00:18:37.924 } 00:18:37.924 ]' 00:18:37.924 09:36:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:37.924 09:36:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:37.924 09:36:05 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:38.183 [2024-11-29 09:36:05.815811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.815857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:38.183 [2024-11-29 09:36:05.815870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:38.183 [2024-11-29 09:36:05.815881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.815916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:38.183 [2024-11-29 09:36:05.816363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.816377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:38.183 [2024-11-29 09:36:05.816388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:18:38.183 [2024-11-29 09:36:05.816397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.816998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.817009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:38.183 [2024-11-29 09:36:05.817023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:18:38.183 [2024-11-29 09:36:05.817030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.820686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.820706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:38.183 [2024-11-29 09:36:05.820717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.624 ms 00:18:38.183 [2024-11-29 09:36:05.820725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.827731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.827769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:38.183 [2024-11-29 09:36:05.827782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.959 ms 00:18:38.183 [2024-11-29 09:36:05.827789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.829289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.829323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:38.183 [2024-11-29 09:36:05.829334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:18:38.183 [2024-11-29 09:36:05.829341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.833118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:18:38.183 [2024-11-29 09:36:05.833151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:38.183 [2024-11-29 09:36:05.833163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:18:38.183 [2024-11-29 09:36:05.833173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.833369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.833382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:38.183 [2024-11-29 09:36:05.833392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:18:38.183 [2024-11-29 09:36:05.833399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.834948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.835067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:38.183 [2024-11-29 09:36:05.835087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:18:38.183 [2024-11-29 09:36:05.835095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.836442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.836473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:38.183 [2024-11-29 09:36:05.836484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:18:38.183 [2024-11-29 09:36:05.836490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.837633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.837662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:38.183 [2024-11-29 09:36:05.837673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:18:38.183 [2024-11-29 09:36:05.837679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.838557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.183 [2024-11-29 09:36:05.838671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:38.183 [2024-11-29 09:36:05.838688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:18:38.183 [2024-11-29 09:36:05.838695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.183 [2024-11-29 09:36:05.838738] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:38.183 [2024-11-29 09:36:05.838751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:38.183 [2024-11-29 09:36:05.838975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.838983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.838993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839227] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 
09:36:05.839432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:38.184 [2024-11-29 09:36:05.839583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:38.185 [2024-11-29 09:36:05.839610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:38.185 [2024-11-29 09:36:05.839626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:38.185 [2024-11-29 09:36:05.839635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f68941d2-65a9-48fc-bf3a-b4b6b1215387 00:18:38.185 [2024-11-29 09:36:05.839643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:38.185 [2024-11-29 09:36:05.839653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:38.185 [2024-11-29 09:36:05.839670] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:38.185 [2024-11-29 09:36:05.839679] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
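All 100 bands in the dump above are still free with wr_cnt 0, and the statistics block is consistent with that: 960 total writes against 0 user writes, so every write so far was FTL-internal. Taking write amplification in its usual sense, total media writes over host writes, the reported value follows directly; a one-line check:

  awk 'BEGIN { total=960; user=0; print (user ? total/user : "inf") }'   # reproduces "WAF: inf"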
00:18:38.185 [2024-11-29 09:36:05.839686] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:38.185 [2024-11-29 09:36:05.839695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:38.185 [2024-11-29 09:36:05.839702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:38.185 [2024-11-29 09:36:05.839712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:38.185 [2024-11-29 09:36:05.839718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:38.185 [2024-11-29 09:36:05.839727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.185 [2024-11-29 09:36:05.839734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:38.185 [2024-11-29 09:36:05.839746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:18:38.185 [2024-11-29 09:36:05.839753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.841300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.185 [2024-11-29 09:36:05.841321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:38.185 [2024-11-29 09:36:05.841331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:18:38.185 [2024-11-29 09:36:05.841338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.841455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.185 [2024-11-29 09:36:05.841465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:38.185 [2024-11-29 09:36:05.841476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:38.185 [2024-11-29 09:36:05.841502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.847037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.847150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.185 [2024-11-29 09:36:05.847208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.847267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.847362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.847410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.185 [2024-11-29 09:36:05.847467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.847489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.847574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.847634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.185 [2024-11-29 09:36:05.847668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.847721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.847775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.847835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.185 [2024-11-29 09:36:05.847860] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.847879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.857307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.857448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.185 [2024-11-29 09:36:05.857571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.857666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.865668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.865802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.185 [2024-11-29 09:36:05.865859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.865881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.865962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.865988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.185 [2024-11-29 09:36:05.866039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.866060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.866128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.866151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.185 [2024-11-29 09:36:05.866187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.866249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.866351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.866379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.185 [2024-11-29 09:36:05.866428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.866450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.866522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.866546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:38.185 [2024-11-29 09:36:05.866570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.866633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.866702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.866725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.185 [2024-11-29 09:36:05.866775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.866798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.866863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.185 [2024-11-29 09:36:05.866873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:18:38.185 [2024-11-29 09:36:05.866882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.185 [2024-11-29 09:36:05.866890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.185 [2024-11-29 09:36:05.867092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.247 ms, result 0 00:18:38.185 true 00:18:38.185 09:36:05 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89152 00:18:38.185 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89152 ']' 00:18:38.185 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89152 00:18:38.185 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:38.185 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.185 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89152 00:18:38.444 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.444 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.444 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89152' 00:18:38.444 killing process with pid 89152 00:18:38.444 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89152 00:18:38.444 09:36:05 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89152 00:18:43.708 09:36:10 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:43.968 65536+0 records in 00:18:43.968 65536+0 records out 00:18:43.968 268435456 bytes (268 MB, 256 MiB) copied, 0.805178 s, 333 MB/s 00:18:43.968 09:36:11 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:43.968 [2024-11-29 09:36:11.625405] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:43.968 [2024-11-29 09:36:11.625653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89306 ] 00:18:44.229 [2024-11-29 09:36:11.757804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
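The xtrace above boils down to a short sequence: the harness's killprocess() stops the ftl app from the previous step (pid 89152) and waits for it, dd generates a 256 MiB random pattern (65536 blocks of 4 KiB = 268435456 bytes, matching the dd summary), and spdk_dd replays that pattern onto the ftl0 bdev described by ftl.json. A minimal standalone sketch of that step, assuming the dd output is captured into the random_pattern file that spdk_dd then reads (the redirection itself is not visible in the xtrace):

    # previous step: the harness's killprocess() sends kill to pid 89152 and waits for it to exit
    # 65536 x 4 KiB = 268435456 bytes = 256 MiB of random data (of= path is an assumption; trim.sh's exact redirection is not shown)
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
    # write the pattern to the ftl0 bdev defined in ftl.json, as traced at trim.sh@69
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The spdk_dd startup notices that follow (EAL parameters, core count, FTL band and layout dumps) and the later "Copying: N/256 [MB]" progress lines are the output of that second command.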
00:18:44.229 [2024-11-29 09:36:11.786181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.229 [2024-11-29 09:36:11.806273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.229 [2024-11-29 09:36:11.900365] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:44.229 [2024-11-29 09:36:11.900442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:44.491 [2024-11-29 09:36:12.061164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.491 [2024-11-29 09:36:12.061225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:44.491 [2024-11-29 09:36:12.061241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:44.491 [2024-11-29 09:36:12.061250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.491 [2024-11-29 09:36:12.063928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.491 [2024-11-29 09:36:12.063982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.491 [2024-11-29 09:36:12.063993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:18:44.492 [2024-11-29 09:36:12.064002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.064119] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:44.492 [2024-11-29 09:36:12.064384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:44.492 [2024-11-29 09:36:12.064405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.064413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.492 [2024-11-29 09:36:12.064423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:18:44.492 [2024-11-29 09:36:12.064431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.066414] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:44.492 [2024-11-29 09:36:12.070333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.070392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:44.492 [2024-11-29 09:36:12.070404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.922 ms 00:18:44.492 [2024-11-29 09:36:12.070412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.070499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.070512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:44.492 [2024-11-29 09:36:12.070521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:44.492 [2024-11-29 09:36:12.070529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.078885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.078931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.492 [2024-11-29 09:36:12.078941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.308 ms 00:18:44.492 [2024-11-29 09:36:12.078952] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.079094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.079106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.492 [2024-11-29 09:36:12.079115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:44.492 [2024-11-29 09:36:12.079126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.079155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.079163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:44.492 [2024-11-29 09:36:12.079172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:44.492 [2024-11-29 09:36:12.079180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.079202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:44.492 [2024-11-29 09:36:12.081353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.081396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.492 [2024-11-29 09:36:12.081414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:18:44.492 [2024-11-29 09:36:12.081424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.081469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.081478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:44.492 [2024-11-29 09:36:12.081487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:44.492 [2024-11-29 09:36:12.081510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.081529] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:44.492 [2024-11-29 09:36:12.081550] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:44.492 [2024-11-29 09:36:12.081626] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:44.492 [2024-11-29 09:36:12.081646] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:44.492 [2024-11-29 09:36:12.081753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:44.492 [2024-11-29 09:36:12.081765] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:44.492 [2024-11-29 09:36:12.081776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:44.492 [2024-11-29 09:36:12.081787] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:44.492 [2024-11-29 09:36:12.081796] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:44.492 [2024-11-29 09:36:12.081805] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:44.492 [2024-11-29 09:36:12.081812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:18:44.492 [2024-11-29 09:36:12.081824] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:44.492 [2024-11-29 09:36:12.081838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:44.492 [2024-11-29 09:36:12.081846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.081853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:44.492 [2024-11-29 09:36:12.081861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:18:44.492 [2024-11-29 09:36:12.081868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.081962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.492 [2024-11-29 09:36:12.081971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:44.492 [2024-11-29 09:36:12.081979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:44.492 [2024-11-29 09:36:12.081986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.492 [2024-11-29 09:36:12.082087] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:44.492 [2024-11-29 09:36:12.082099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:44.492 [2024-11-29 09:36:12.082109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.492 [2024-11-29 09:36:12.082118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.492 [2024-11-29 09:36:12.082134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:44.492 [2024-11-29 09:36:12.082143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:44.492 [2024-11-29 09:36:12.082153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:44.492 [2024-11-29 09:36:12.082161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:44.492 [2024-11-29 09:36:12.082169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:44.492 [2024-11-29 09:36:12.082177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.492 [2024-11-29 09:36:12.082185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:44.492 [2024-11-29 09:36:12.082193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:44.492 [2024-11-29 09:36:12.082201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.492 [2024-11-29 09:36:12.082209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:44.492 [2024-11-29 09:36:12.082217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:44.492 [2024-11-29 09:36:12.082225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.492 [2024-11-29 09:36:12.082234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:44.492 [2024-11-29 09:36:12.082246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:44.492 [2024-11-29 09:36:12.082254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.492 [2024-11-29 09:36:12.082262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:44.492 [2024-11-29 09:36:12.082271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:44.492 [2024-11-29 09:36:12.082279] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.492 [2024-11-29 09:36:12.082295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:44.492 [2024-11-29 09:36:12.082303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:44.493 [2024-11-29 09:36:12.082311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.493 [2024-11-29 09:36:12.082319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:44.493 [2024-11-29 09:36:12.082326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:44.493 [2024-11-29 09:36:12.082333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.493 [2024-11-29 09:36:12.082342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:44.493 [2024-11-29 09:36:12.082350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:44.493 [2024-11-29 09:36:12.082358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.493 [2024-11-29 09:36:12.082366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:44.493 [2024-11-29 09:36:12.082374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:44.493 [2024-11-29 09:36:12.082382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.493 [2024-11-29 09:36:12.082390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:44.493 [2024-11-29 09:36:12.082398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:44.493 [2024-11-29 09:36:12.082406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.493 [2024-11-29 09:36:12.082413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:44.493 [2024-11-29 09:36:12.082422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:44.493 [2024-11-29 09:36:12.082428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.493 [2024-11-29 09:36:12.082435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:44.493 [2024-11-29 09:36:12.082442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:44.493 [2024-11-29 09:36:12.082449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.493 [2024-11-29 09:36:12.082455] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:44.493 [2024-11-29 09:36:12.082463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:44.493 [2024-11-29 09:36:12.082471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.493 [2024-11-29 09:36:12.082478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.493 [2024-11-29 09:36:12.082490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:44.493 [2024-11-29 09:36:12.082497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:44.493 [2024-11-29 09:36:12.082505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:44.493 [2024-11-29 09:36:12.082513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:44.493 [2024-11-29 09:36:12.082520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:44.493 [2024-11-29 09:36:12.082527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:18:44.493 [2024-11-29 09:36:12.082536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:44.493 [2024-11-29 09:36:12.082552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.493 [2024-11-29 09:36:12.082563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:44.493 [2024-11-29 09:36:12.082571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:44.493 [2024-11-29 09:36:12.082579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:44.493 [2024-11-29 09:36:12.082602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:44.493 [2024-11-29 09:36:12.082611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:44.493 [2024-11-29 09:36:12.082618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:44.493 [2024-11-29 09:36:12.082625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:44.493 [2024-11-29 09:36:12.082633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:44.493 [2024-11-29 09:36:12.082640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:44.493 [2024-11-29 09:36:12.082648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:44.493 [2024-11-29 09:36:12.082655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:44.493 [2024-11-29 09:36:12.082663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:44.493 [2024-11-29 09:36:12.082670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:44.493 [2024-11-29 09:36:12.082678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:44.493 [2024-11-29 09:36:12.082685] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:44.493 [2024-11-29 09:36:12.082696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.493 [2024-11-29 09:36:12.082711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:44.493 [2024-11-29 09:36:12.082719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:44.493 [2024-11-29 09:36:12.082727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:44.493 [2024-11-29 09:36:12.082735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:44.493 [2024-11-29 09:36:12.082743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.493 [2024-11-29 09:36:12.082751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:44.493 [2024-11-29 09:36:12.082758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:18:44.493 [2024-11-29 09:36:12.082766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.493 [2024-11-29 09:36:12.096704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.493 [2024-11-29 09:36:12.096754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.493 [2024-11-29 09:36:12.096774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.884 ms 00:18:44.493 [2024-11-29 09:36:12.096787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.493 [2024-11-29 09:36:12.096931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.493 [2024-11-29 09:36:12.096954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:44.493 [2024-11-29 09:36:12.096963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:44.493 [2024-11-29 09:36:12.096971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.493 [2024-11-29 09:36:12.117774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.493 [2024-11-29 09:36:12.117948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.493 [2024-11-29 09:36:12.117977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.779 ms 00:18:44.493 [2024-11-29 09:36:12.117993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.493 [2024-11-29 09:36:12.118084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.493 [2024-11-29 09:36:12.118099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.493 [2024-11-29 09:36:12.118109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:44.493 [2024-11-29 09:36:12.118124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.493 [2024-11-29 09:36:12.118487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.493 [2024-11-29 09:36:12.118506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.493 [2024-11-29 09:36:12.118519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:18:44.494 [2024-11-29 09:36:12.118536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.118726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.118739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.494 [2024-11-29 09:36:12.118755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:18:44.494 [2024-11-29 09:36:12.118765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.124579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 
09:36:12.124633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.494 [2024-11-29 09:36:12.124642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.787 ms 00:18:44.494 [2024-11-29 09:36:12.124650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.127336] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:44.494 [2024-11-29 09:36:12.127372] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:44.494 [2024-11-29 09:36:12.127383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.127391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:44.494 [2024-11-29 09:36:12.127399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:18:44.494 [2024-11-29 09:36:12.127406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.141906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.141938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:44.494 [2024-11-29 09:36:12.141948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.457 ms 00:18:44.494 [2024-11-29 09:36:12.141956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.144167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.144298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:44.494 [2024-11-29 09:36:12.144316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:18:44.494 [2024-11-29 09:36:12.144323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.146327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.146361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:44.494 [2024-11-29 09:36:12.146370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:18:44.494 [2024-11-29 09:36:12.146377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.146724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.146739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:44.494 [2024-11-29 09:36:12.146748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:44.494 [2024-11-29 09:36:12.146755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.162224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.162275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:44.494 [2024-11-29 09:36:12.162287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.448 ms 00:18:44.494 [2024-11-29 09:36:12.162294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.169724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:44.494 [2024-11-29 09:36:12.183990] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.184024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.494 [2024-11-29 09:36:12.184042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.638 ms 00:18:44.494 [2024-11-29 09:36:12.184050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.184124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.184134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:44.494 [2024-11-29 09:36:12.184146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:44.494 [2024-11-29 09:36:12.184155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.184198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.184210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.494 [2024-11-29 09:36:12.184218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:44.494 [2024-11-29 09:36:12.184225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.184254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.184263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.494 [2024-11-29 09:36:12.184271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:44.494 [2024-11-29 09:36:12.184280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.184313] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:44.494 [2024-11-29 09:36:12.184323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.184330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:44.494 [2024-11-29 09:36:12.184338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:44.494 [2024-11-29 09:36:12.184345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.188563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.188686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.494 [2024-11-29 09:36:12.188702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.196 ms 00:18:44.494 [2024-11-29 09:36:12.188710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.188797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.494 [2024-11-29 09:36:12.188807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.494 [2024-11-29 09:36:12.188815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:44.494 [2024-11-29 09:36:12.188822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.494 [2024-11-29 09:36:12.189627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.494 [2024-11-29 09:36:12.190666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.169 
ms, result 0 00:18:44.494 [2024-11-29 09:36:12.191653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.494 [2024-11-29 09:36:12.201196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:45.878  [2024-11-29T09:36:14.547Z] Copying: 18/256 [MB] (18 MBps) [2024-11-29T09:36:15.491Z] Copying: 40/256 [MB] (22 MBps) [2024-11-29T09:36:16.435Z] Copying: 59/256 [MB] (18 MBps) [2024-11-29T09:36:17.379Z] Copying: 76/256 [MB] (17 MBps) [2024-11-29T09:36:18.324Z] Copying: 96/256 [MB] (19 MBps) [2024-11-29T09:36:19.267Z] Copying: 113/256 [MB] (17 MBps) [2024-11-29T09:36:20.208Z] Copying: 131/256 [MB] (17 MBps) [2024-11-29T09:36:21.591Z] Copying: 159/256 [MB] (27 MBps) [2024-11-29T09:36:22.533Z] Copying: 190/256 [MB] (31 MBps) [2024-11-29T09:36:23.476Z] Copying: 206/256 [MB] (16 MBps) [2024-11-29T09:36:24.422Z] Copying: 217/256 [MB] (10 MBps) [2024-11-29T09:36:25.366Z] Copying: 230/256 [MB] (13 MBps) [2024-11-29T09:36:26.312Z] Copying: 241/256 [MB] (10 MBps) [2024-11-29T09:36:26.886Z] Copying: 251/256 [MB] (10 MBps) [2024-11-29T09:36:26.886Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-29 09:36:26.612130] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:59.160 [2024-11-29 09:36:26.613421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.160 [2024-11-29 09:36:26.613463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:59.160 [2024-11-29 09:36:26.613476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:59.160 [2024-11-29 09:36:26.613483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.160 [2024-11-29 09:36:26.613504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:59.160 [2024-11-29 09:36:26.614004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.160 [2024-11-29 09:36:26.614034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:59.160 [2024-11-29 09:36:26.614044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:18:59.160 [2024-11-29 09:36:26.614052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.160 [2024-11-29 09:36:26.616709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.160 [2024-11-29 09:36:26.616743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:59.160 [2024-11-29 09:36:26.616757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:18:59.160 [2024-11-29 09:36:26.616765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.160 [2024-11-29 09:36:26.624132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.160 [2024-11-29 09:36:26.624166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:59.160 [2024-11-29 09:36:26.624183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.350 ms 00:18:59.160 [2024-11-29 09:36:26.624193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.160 [2024-11-29 09:36:26.631133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.160 [2024-11-29 09:36:26.631269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:59.160 [2024-11-29 
09:36:26.631285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.906 ms 00:18:59.160 [2024-11-29 09:36:26.631298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.160 [2024-11-29 09:36:26.633827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.160 [2024-11-29 09:36:26.633873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:59.160 [2024-11-29 09:36:26.633882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:18:59.161 [2024-11-29 09:36:26.633889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.161 [2024-11-29 09:36:26.637506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.161 [2024-11-29 09:36:26.637561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:59.161 [2024-11-29 09:36:26.637572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:18:59.161 [2024-11-29 09:36:26.637580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.161 [2024-11-29 09:36:26.637711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.161 [2024-11-29 09:36:26.637721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:59.161 [2024-11-29 09:36:26.637730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:59.161 [2024-11-29 09:36:26.637746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.161 [2024-11-29 09:36:26.640492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.161 [2024-11-29 09:36:26.640526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:59.161 [2024-11-29 09:36:26.640535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:18:59.161 [2024-11-29 09:36:26.640542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.161 [2024-11-29 09:36:26.643049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.161 [2024-11-29 09:36:26.643084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:59.161 [2024-11-29 09:36:26.643092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 00:18:59.161 [2024-11-29 09:36:26.643099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.161 [2024-11-29 09:36:26.645076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.161 [2024-11-29 09:36:26.645206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:59.161 [2024-11-29 09:36:26.645220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:18:59.161 [2024-11-29 09:36:26.645227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.161 [2024-11-29 09:36:26.646921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.161 [2024-11-29 09:36:26.646958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:59.161 [2024-11-29 09:36:26.646966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:18:59.161 [2024-11-29 09:36:26.646972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.161 [2024-11-29 09:36:26.647004] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:59.161 [2024-11-29 09:36:26.647018] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647203] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 
[2024-11-29 09:36:26.647390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:59.161 [2024-11-29 09:36:26.647419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:18:59.162 [2024-11-29 09:36:26.647574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:18:59.162 [2024-11-29 09:36:26.647793] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:59.162 [2024-11-29 09:36:26.647801] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f68941d2-65a9-48fc-bf3a-b4b6b1215387 00:18:59.162 [2024-11-29 09:36:26.647815] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:59.162 [2024-11-29 09:36:26.647822] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:59.162 [2024-11-29 09:36:26.647829] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:59.162 [2024-11-29 09:36:26.647840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:59.162 [2024-11-29 09:36:26.647847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:59.162 [2024-11-29 09:36:26.647854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:59.162 [2024-11-29 09:36:26.647863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:59.162 [2024-11-29 09:36:26.647870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:59.162 [2024-11-29 09:36:26.647876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:59.162 [2024-11-29 09:36:26.647883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.162 [2024-11-29 09:36:26.647890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:59.162 [2024-11-29 09:36:26.647898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:18:59.162 [2024-11-29 09:36:26.647906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.162 [2024-11-29 09:36:26.649917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.162 [2024-11-29 09:36:26.650028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:59.162 [2024-11-29 09:36:26.650080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.994 ms 00:18:59.162 [2024-11-29 09:36:26.650112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.162 [2024-11-29 09:36:26.650220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.162 [2024-11-29 09:36:26.650245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:59.162 [2024-11-29 09:36:26.650308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:59.162 [2024-11-29 09:36:26.650333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.162 [2024-11-29 09:36:26.656245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.162 [2024-11-29 09:36:26.656366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.162 [2024-11-29 09:36:26.656415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.162 [2024-11-29 09:36:26.656444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.162 [2024-11-29 09:36:26.656527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.162 [2024-11-29 09:36:26.656556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.162 [2024-11-29 09:36:26.656574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.162 [2024-11-29 09:36:26.656653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:18:59.162 [2024-11-29 09:36:26.656725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.162 [2024-11-29 09:36:26.656748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.162 [2024-11-29 09:36:26.656768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.162 [2024-11-29 09:36:26.656861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.162 [2024-11-29 09:36:26.656901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.162 [2024-11-29 09:36:26.656922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.162 [2024-11-29 09:36:26.656942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.656960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.667767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.667931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.163 [2024-11-29 09:36:26.667985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.668014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.676739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.676892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.163 [2024-11-29 09:36:26.676943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.676976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.677017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.677038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.163 [2024-11-29 09:36:26.677058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.677077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.677121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.677145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.163 [2024-11-29 09:36:26.677212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.677236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.677329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.677353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.163 [2024-11-29 09:36:26.677374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.677393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.677436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.677459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:59.163 [2024-11-29 09:36:26.677484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 
09:36:26.677557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.677723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.677750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.163 [2024-11-29 09:36:26.677796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.677825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.677885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.163 [2024-11-29 09:36:26.677941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.163 [2024-11-29 09:36:26.677953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.163 [2024-11-29 09:36:26.677962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.163 [2024-11-29 09:36:26.678108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.670 ms, result 0 00:18:59.425 00:18:59.425 00:18:59.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.686 09:36:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89477 00:18:59.686 09:36:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:59.686 09:36:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89477 00:18:59.686 09:36:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89477 ']' 00:18:59.686 09:36:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.686 09:36:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.686 09:36:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.686 09:36:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.686 09:36:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:59.686 [2024-11-29 09:36:27.246842] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:59.686 [2024-11-29 09:36:27.246981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89477 ] 00:18:59.686 [2024-11-29 09:36:27.382024] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
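The xtrace lines above show how trim.sh brings the target up for the trim test: spdk_tgt is started in the background with -L ftl_init logging, its pid is recorded in svcpid (89477 here), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock accepts connections. A minimal sketch of that launch pattern, reconstructed from the trace rather than quoted from trim.sh itself (the backgrounding via & and $! is an assumption):

  # Start the SPDK target with FTL initialization logging enabled,
  # remember its pid, then block until the RPC socket is serving.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  waitforlisten "$svcpid"   # autotest_common.sh helper; polls /var/tmp/spdk.sock

Once waitforlisten returns, the app and reactor notices that follow are the target completing startup on its single available core.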
00:18:59.949 [2024-11-29 09:36:27.412312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.949 [2024-11-29 09:36:27.441841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.521 09:36:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.521 09:36:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:00.521 09:36:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:00.784 [2024-11-29 09:36:28.317729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:00.784 [2024-11-29 09:36:28.317810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:00.784 [2024-11-29 09:36:28.495257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.784 [2024-11-29 09:36:28.495321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:00.784 [2024-11-29 09:36:28.495338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:00.784 [2024-11-29 09:36:28.495347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.784 [2024-11-29 09:36:28.497941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.784 [2024-11-29 09:36:28.497987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:00.784 [2024-11-29 09:36:28.498000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:19:00.784 [2024-11-29 09:36:28.498011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.784 [2024-11-29 09:36:28.498140] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:00.784 [2024-11-29 09:36:28.498401] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:00.784 [2024-11-29 09:36:28.498419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.784 [2024-11-29 09:36:28.498427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:00.784 [2024-11-29 09:36:28.498442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:00.784 [2024-11-29 09:36:28.498450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.784 [2024-11-29 09:36:28.500305] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:00.784 [2024-11-29 09:36:28.504249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.784 [2024-11-29 09:36:28.504450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:00.784 [2024-11-29 09:36:28.504470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:19:00.784 [2024-11-29 09:36:28.504486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.784 [2024-11-29 09:36:28.504669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.784 [2024-11-29 09:36:28.504705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:00.784 [2024-11-29 09:36:28.504716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:00.784 [2024-11-29 09:36:28.504728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.512759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.047 [2024-11-29 
09:36:28.512808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.047 [2024-11-29 09:36:28.512819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.980 ms 00:19:01.047 [2024-11-29 09:36:28.512830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.512951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.047 [2024-11-29 09:36:28.512964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.047 [2024-11-29 09:36:28.512977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:01.047 [2024-11-29 09:36:28.512987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.513020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.047 [2024-11-29 09:36:28.513030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:01.047 [2024-11-29 09:36:28.513038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:01.047 [2024-11-29 09:36:28.513048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.513073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:01.047 [2024-11-29 09:36:28.515182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.047 [2024-11-29 09:36:28.515354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.047 [2024-11-29 09:36:28.515380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:19:01.047 [2024-11-29 09:36:28.515389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.515436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.047 [2024-11-29 09:36:28.515445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:01.047 [2024-11-29 09:36:28.515459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:01.047 [2024-11-29 09:36:28.515467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.515489] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:01.047 [2024-11-29 09:36:28.515510] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:01.047 [2024-11-29 09:36:28.515558] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:01.047 [2024-11-29 09:36:28.515574] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:01.047 [2024-11-29 09:36:28.515703] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:01.047 [2024-11-29 09:36:28.515716] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:01.047 [2024-11-29 09:36:28.515735] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:01.047 [2024-11-29 09:36:28.515747] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:01.047 [2024-11-29 09:36:28.515761] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:01.047 [2024-11-29 09:36:28.515769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:01.047 [2024-11-29 09:36:28.515780] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:01.047 [2024-11-29 09:36:28.515790] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:01.047 [2024-11-29 09:36:28.515801] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:01.047 [2024-11-29 09:36:28.515809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.047 [2024-11-29 09:36:28.515818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:01.047 [2024-11-29 09:36:28.515826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:19:01.047 [2024-11-29 09:36:28.515836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.515922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.047 [2024-11-29 09:36:28.515934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:01.047 [2024-11-29 09:36:28.515943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:01.047 [2024-11-29 09:36:28.515953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.047 [2024-11-29 09:36:28.516058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:01.047 [2024-11-29 09:36:28.516071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:01.047 [2024-11-29 09:36:28.516080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:01.047 [2024-11-29 09:36:28.516114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:01.047 [2024-11-29 09:36:28.516146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.047 [2024-11-29 09:36:28.516164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:01.047 [2024-11-29 09:36:28.516174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:01.047 [2024-11-29 09:36:28.516183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.047 [2024-11-29 09:36:28.516192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:01.047 [2024-11-29 09:36:28.516200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:01.047 [2024-11-29 09:36:28.516209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:01.047 [2024-11-29 09:36:28.516227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516235] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:01.047 [2024-11-29 09:36:28.516260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:01.047 [2024-11-29 09:36:28.516287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:01.047 [2024-11-29 09:36:28.516311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:01.047 [2024-11-29 09:36:28.516335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:01.047 [2024-11-29 09:36:28.516358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.047 [2024-11-29 09:36:28.516373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:01.047 [2024-11-29 09:36:28.516384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:01.047 [2024-11-29 09:36:28.516390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.047 [2024-11-29 09:36:28.516398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:01.047 [2024-11-29 09:36:28.516406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:01.047 [2024-11-29 09:36:28.516413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:01.047 [2024-11-29 09:36:28.516428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:01.047 [2024-11-29 09:36:28.516435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516445] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:01.047 [2024-11-29 09:36:28.516458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:01.047 [2024-11-29 09:36:28.516467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.047 [2024-11-29 09:36:28.516474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.047 [2024-11-29 09:36:28.516484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:01.047 [2024-11-29 09:36:28.516490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:01.047 [2024-11-29 09:36:28.516499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:01.047 [2024-11-29 09:36:28.516505] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:01.048 [2024-11-29 09:36:28.516516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:01.048 [2024-11-29 09:36:28.516524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:01.048 [2024-11-29 09:36:28.516536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:01.048 [2024-11-29 09:36:28.516547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.048 [2024-11-29 09:36:28.516558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:01.048 [2024-11-29 09:36:28.516565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:01.048 [2024-11-29 09:36:28.516574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:01.048 [2024-11-29 09:36:28.516581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:01.048 [2024-11-29 09:36:28.516615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:01.048 [2024-11-29 09:36:28.516623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:01.048 [2024-11-29 09:36:28.516632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:01.048 [2024-11-29 09:36:28.516639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:01.048 [2024-11-29 09:36:28.516648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:01.048 [2024-11-29 09:36:28.516656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:01.048 [2024-11-29 09:36:28.516666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:01.048 [2024-11-29 09:36:28.516673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:01.048 [2024-11-29 09:36:28.516684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:01.048 [2024-11-29 09:36:28.516692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:01.048 [2024-11-29 09:36:28.516701] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:01.048 [2024-11-29 09:36:28.516710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.048 [2024-11-29 09:36:28.516721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:01.048 [2024-11-29 09:36:28.516729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:01.048 [2024-11-29 09:36:28.516738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:01.048 [2024-11-29 09:36:28.516746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:01.048 [2024-11-29 09:36:28.516755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.516764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:01.048 [2024-11-29 09:36:28.516773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:19:01.048 [2024-11-29 09:36:28.516780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.531059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.531224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.048 [2024-11-29 09:36:28.531302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.216 ms 00:19:01.048 [2024-11-29 09:36:28.531328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.531486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.531616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:01.048 [2024-11-29 09:36:28.531692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:01.048 [2024-11-29 09:36:28.531717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.544305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.544482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.048 [2024-11-29 09:36:28.544547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms 00:19:01.048 [2024-11-29 09:36:28.544570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.544706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.544734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.048 [2024-11-29 09:36:28.544827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:01.048 [2024-11-29 09:36:28.544851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.545381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.545448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.048 [2024-11-29 09:36:28.545473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:19:01.048 [2024-11-29 09:36:28.545696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.545900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.546033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.048 [2024-11-29 09:36:28.546060] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:01.048 [2024-11-29 09:36:28.546085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.554547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.554711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.048 [2024-11-29 09:36:28.554779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.421 ms 00:19:01.048 [2024-11-29 09:36:28.554808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.568478] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:01.048 [2024-11-29 09:36:28.568708] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:01.048 [2024-11-29 09:36:28.568801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.568824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:01.048 [2024-11-29 09:36:28.568848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.854 ms 00:19:01.048 [2024-11-29 09:36:28.568867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.588205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.588364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:01.048 [2024-11-29 09:36:28.588436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.269 ms 00:19:01.048 [2024-11-29 09:36:28.588462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.591477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.591650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:01.048 [2024-11-29 09:36:28.591714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:19:01.048 [2024-11-29 09:36:28.591737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.594331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.594477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:01.048 [2024-11-29 09:36:28.594536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:19:01.048 [2024-11-29 09:36:28.594559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.594964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.595193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:01.048 [2024-11-29 09:36:28.595268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:01.048 [2024-11-29 09:36:28.595293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.619365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.619534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:01.048 [2024-11-29 09:36:28.619626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.988 ms 
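The layout dump earlier in this startup is internally consistent, and the headline numbers can be checked directly. With 23592960 L2P entries at the reported 4-byte address size, the l2p region comes out to exactly the 90.00 MiB shown; assuming the FTL's usual 4 KiB logical block size (the dump reports counts, not the block size itself), those entries also map 90 GiB of user space inside the 102400.00 MiB data_btm region. A quick shell check of both figures:

  # 23592960 entries * 4 B/entry = 90 MiB ("Region l2p ... blocks: 90.00 MiB")
  echo $(( 23592960 * 4 / 1024 / 1024 ))    # prints 90
  # 23592960 entries * 4096 B/block = 90 GiB of addressable user data
  echo $(( 23592960 * 4096 / 1024**3 ))     # prints 90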
00:19:01.048 [2024-11-29 09:36:28.619652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.627758] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:01.048 [2024-11-29 09:36:28.647290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.647451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:01.048 [2024-11-29 09:36:28.647511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.501 ms 00:19:01.048 [2024-11-29 09:36:28.647540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.647666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.647697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:01.048 [2024-11-29 09:36:28.647719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:01.048 [2024-11-29 09:36:28.647797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.647877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.647909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:01.048 [2024-11-29 09:36:28.647930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:01.048 [2024-11-29 09:36:28.647951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.048 [2024-11-29 09:36:28.648034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.048 [2024-11-29 09:36:28.648055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:01.049 [2024-11-29 09:36:28.648065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:01.049 [2024-11-29 09:36:28.648076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.049 [2024-11-29 09:36:28.648116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:01.049 [2024-11-29 09:36:28.648132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.049 [2024-11-29 09:36:28.648140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:01.049 [2024-11-29 09:36:28.648152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:01.049 [2024-11-29 09:36:28.648160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.049 [2024-11-29 09:36:28.654171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.049 [2024-11-29 09:36:28.654225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:01.049 [2024-11-29 09:36:28.654239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.979 ms 00:19:01.049 [2024-11-29 09:36:28.654248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.049 [2024-11-29 09:36:28.654343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.049 [2024-11-29 09:36:28.654354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:01.049 [2024-11-29 09:36:28.654365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:01.049 [2024-11-29 09:36:28.654378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.049 [2024-11-29 
09:36:28.655424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:01.049 [2024-11-29 09:36:28.656820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.852 ms, result 0 00:19:01.049 [2024-11-29 09:36:28.658791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:01.049 Some configs were skipped because the RPC state that can call them passed over. 00:19:01.049 09:36:28 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:01.329 [2024-11-29 09:36:28.896570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.329 [2024-11-29 09:36:28.896782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:01.329 [2024-11-29 09:36:28.896850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:19:01.329 [2024-11-29 09:36:28.896884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.329 [2024-11-29 09:36:28.896942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.489 ms, result 0 00:19:01.329 true 00:19:01.329 09:36:28 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:01.614 [2024-11-29 09:36:29.112633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.614 [2024-11-29 09:36:29.112808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:01.614 [2024-11-29 09:36:29.112874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.837 ms 00:19:01.614 [2024-11-29 09:36:29.112898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.614 [2024-11-29 09:36:29.112958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.166 ms, result 0 00:19:01.614 true 00:19:01.614 09:36:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89477 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89477 ']' 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89477 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89477 00:19:01.614 killing process with pid 89477 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89477' 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89477 00:19:01.614 09:36:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89477 00:19:01.614 [2024-11-29 09:36:29.290985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.614 [2024-11-29 09:36:29.291048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.614 [2024-11-29 09:36:29.291062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:01.614 [2024-11-29 
09:36:29.291074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.614 [2024-11-29 09:36:29.291099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:01.614 [2024-11-29 09:36:29.291705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.614 [2024-11-29 09:36:29.291738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.614 [2024-11-29 09:36:29.291750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:19:01.614 [2024-11-29 09:36:29.291758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.614 [2024-11-29 09:36:29.292042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.614 [2024-11-29 09:36:29.292059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.614 [2024-11-29 09:36:29.292070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:01.614 [2024-11-29 09:36:29.292079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.614 [2024-11-29 09:36:29.296616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.614 [2024-11-29 09:36:29.296655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.614 [2024-11-29 09:36:29.296669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.514 ms 00:19:01.614 [2024-11-29 09:36:29.296676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.614 [2024-11-29 09:36:29.303666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.303700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:01.615 [2024-11-29 09:36:29.303716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.945 ms 00:19:01.615 [2024-11-29 09:36:29.303727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.306325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.306471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.615 [2024-11-29 09:36:29.306491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:19:01.615 [2024-11-29 09:36:29.306499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.310564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.310623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.615 [2024-11-29 09:36:29.310636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.020 ms 00:19:01.615 [2024-11-29 09:36:29.310644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.310780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.310790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.615 [2024-11-29 09:36:29.310800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:01.615 [2024-11-29 09:36:29.310808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.313741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.313777] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:01.615 [2024-11-29 09:36:29.313792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.908 ms 00:19:01.615 [2024-11-29 09:36:29.313799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.316261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.316301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:01.615 [2024-11-29 09:36:29.316315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:19:01.615 [2024-11-29 09:36:29.316323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.318445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.318486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.615 [2024-11-29 09:36:29.318498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.074 ms 00:19:01.615 [2024-11-29 09:36:29.318505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.320325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.615 [2024-11-29 09:36:29.320461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:01.615 [2024-11-29 09:36:29.320481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:19:01.615 [2024-11-29 09:36:29.320488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.615 [2024-11-29 09:36:29.320526] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.615 [2024-11-29 09:36:29.320540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320665] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320882] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.320999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 
09:36:29.321089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.615 [2024-11-29 09:36:29.321100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:01.616 [2024-11-29 09:36:29.321313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.616 [2024-11-29 09:36:29.321448] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.616 [2024-11-29 09:36:29.321458] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f68941d2-65a9-48fc-bf3a-b4b6b1215387 00:19:01.616 [2024-11-29 09:36:29.321465] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.616 [2024-11-29 09:36:29.321475] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.616 [2024-11-29 09:36:29.321482] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.616 [2024-11-29 09:36:29.321499] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.616 [2024-11-29 09:36:29.321506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.616 [2024-11-29 09:36:29.321526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.616 [2024-11-29 09:36:29.321537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.616 [2024-11-29 09:36:29.321545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.616 [2024-11-29 09:36:29.321552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.616 [2024-11-29 09:36:29.321561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.616 [2024-11-29 09:36:29.321569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.616 [2024-11-29 09:36:29.321581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:19:01.616 [2024-11-29 09:36:29.321602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:01.616 [2024-11-29 09:36:29.323449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.616 [2024-11-29 09:36:29.323470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.616 [2024-11-29 09:36:29.323482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:19:01.616 [2024-11-29 09:36:29.323491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.616 [2024-11-29 09:36:29.323856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.616 [2024-11-29 09:36:29.323891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.616 [2024-11-29 09:36:29.323919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:19:01.616 [2024-11-29 09:36:29.323939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.330963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.331110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.906 [2024-11-29 09:36:29.331168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.331191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.331281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.331305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.906 [2024-11-29 09:36:29.331333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.331352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.331415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.331496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.906 [2024-11-29 09:36:29.331522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.331542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.331576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.331621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.906 [2024-11-29 09:36:29.331644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.331665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.344717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.344896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.906 [2024-11-29 09:36:29.344955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.344978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.354789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.354959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.906 [2024-11-29 09:36:29.355027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 
[2024-11-29 09:36:29.355052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.355134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.355158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.906 [2024-11-29 09:36:29.355182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.355201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.355247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.355267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.906 [2024-11-29 09:36:29.355344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.355373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.355477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.355502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.906 [2024-11-29 09:36:29.355524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.355542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.355606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.355630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.906 [2024-11-29 09:36:29.355657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.355745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.355814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.355837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.906 [2024-11-29 09:36:29.355859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.355878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.356004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.906 [2024-11-29 09:36:29.356032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.906 [2024-11-29 09:36:29.356054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.906 [2024-11-29 09:36:29.356073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.906 [2024-11-29 09:36:29.356235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.221 ms, result 0 00:19:01.906 09:36:29 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:01.906 09:36:29 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.166 [2024-11-29 09:36:29.648666] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:19:02.166 [2024-11-29 09:36:29.649022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89513 ] 00:19:02.166 [2024-11-29 09:36:29.783759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:02.166 [2024-11-29 09:36:29.812636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.166 [2024-11-29 09:36:29.841683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.428 [2024-11-29 09:36:29.959055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.428 [2024-11-29 09:36:29.959145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.428 [2024-11-29 09:36:30.119658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.119716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.428 [2024-11-29 09:36:30.119736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.428 [2024-11-29 09:36:30.119745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.122419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.122471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.428 [2024-11-29 09:36:30.122483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.652 ms 00:19:02.428 [2024-11-29 09:36:30.122491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.122633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.428 [2024-11-29 09:36:30.122908] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.428 [2024-11-29 09:36:30.122932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.122941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.428 [2024-11-29 09:36:30.122951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:02.428 [2024-11-29 09:36:30.122959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.124919] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:02.428 [2024-11-29 09:36:30.128739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.128800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:02.428 [2024-11-29 09:36:30.128812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.823 ms 00:19:02.428 [2024-11-29 09:36:30.128820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.128909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.128920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:02.428 [2024-11-29 09:36:30.128930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:02.428 [2024-11-29 
09:36:30.128937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.137000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.137047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.428 [2024-11-29 09:36:30.137059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.015 ms 00:19:02.428 [2024-11-29 09:36:30.137070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.137213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.137225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.428 [2024-11-29 09:36:30.137234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:02.428 [2024-11-29 09:36:30.137245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.137275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.137289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.428 [2024-11-29 09:36:30.137301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:02.428 [2024-11-29 09:36:30.137309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.137331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:02.428 [2024-11-29 09:36:30.139486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.139699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.428 [2024-11-29 09:36:30.139732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:19:02.428 [2024-11-29 09:36:30.139745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.139804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.139818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.428 [2024-11-29 09:36:30.139831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:02.428 [2024-11-29 09:36:30.139843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.139869] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:02.428 [2024-11-29 09:36:30.139899] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:02.428 [2024-11-29 09:36:30.139942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:02.428 [2024-11-29 09:36:30.139961] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:02.428 [2024-11-29 09:36:30.140067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.428 [2024-11-29 09:36:30.140078] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.428 [2024-11-29 09:36:30.140089] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:19:02.428 [2024-11-29 09:36:30.140100] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.428 [2024-11-29 09:36:30.140114] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.428 [2024-11-29 09:36:30.140123] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:02.428 [2024-11-29 09:36:30.140131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:02.428 [2024-11-29 09:36:30.140141] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.428 [2024-11-29 09:36:30.140155] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.428 [2024-11-29 09:36:30.140163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.140172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.428 [2024-11-29 09:36:30.140181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:02.428 [2024-11-29 09:36:30.140188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.140279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.428 [2024-11-29 09:36:30.140289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.428 [2024-11-29 09:36:30.140298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:02.428 [2024-11-29 09:36:30.140311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.428 [2024-11-29 09:36:30.140415] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.428 [2024-11-29 09:36:30.140431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.428 [2024-11-29 09:36:30.140440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.428 [2024-11-29 09:36:30.140449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.428 [2024-11-29 09:36:30.140464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.428 [2024-11-29 09:36:30.140473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.428 [2024-11-29 09:36:30.140483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:02.428 [2024-11-29 09:36:30.140492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:02.428 [2024-11-29 09:36:30.140501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:02.428 [2024-11-29 09:36:30.140509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.428 [2024-11-29 09:36:30.140518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.429 [2024-11-29 09:36:30.140526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:02.429 [2024-11-29 09:36:30.140534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.429 [2024-11-29 09:36:30.140542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.429 [2024-11-29 09:36:30.140550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:02.429 [2024-11-29 09:36:30.140557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:02.429 [2024-11-29 09:36:30.140574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:02.429 [2024-11-29 09:36:30.140580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.429 [2024-11-29 09:36:30.140615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.429 [2024-11-29 09:36:30.140638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.429 [2024-11-29 09:36:30.140646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.429 [2024-11-29 09:36:30.140659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.429 [2024-11-29 09:36:30.140666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.429 [2024-11-29 09:36:30.140681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:02.429 [2024-11-29 09:36:30.140688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.429 [2024-11-29 09:36:30.140702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.429 [2024-11-29 09:36:30.140709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.429 [2024-11-29 09:36:30.140722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.429 [2024-11-29 09:36:30.140729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:02.429 [2024-11-29 09:36:30.140736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.429 [2024-11-29 09:36:30.140744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.429 [2024-11-29 09:36:30.140755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:02.429 [2024-11-29 09:36:30.140762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.429 [2024-11-29 09:36:30.140775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:02.429 [2024-11-29 09:36:30.140782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140790] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.429 [2024-11-29 09:36:30.140799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.429 [2024-11-29 09:36:30.140806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.429 [2024-11-29 09:36:30.140824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.429 [2024-11-29 09:36:30.140833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.429 [2024-11-29 09:36:30.140840] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.429 [2024-11-29 09:36:30.140847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.429 [2024-11-29 09:36:30.140853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.429 [2024-11-29 09:36:30.140861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.429 [2024-11-29 09:36:30.140868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:02.429 [2024-11-29 09:36:30.140876] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.429 [2024-11-29 09:36:30.140888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.429 [2024-11-29 09:36:30.140899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:02.429 [2024-11-29 09:36:30.140906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:02.429 [2024-11-29 09:36:30.140914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:02.429 [2024-11-29 09:36:30.140921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:02.429 [2024-11-29 09:36:30.140929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:02.429 [2024-11-29 09:36:30.140937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:02.429 [2024-11-29 09:36:30.140943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:02.429 [2024-11-29 09:36:30.140950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:02.429 [2024-11-29 09:36:30.140958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:02.429 [2024-11-29 09:36:30.140966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:02.429 [2024-11-29 09:36:30.140973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:02.429 [2024-11-29 09:36:30.140980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:02.429 [2024-11-29 09:36:30.140987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:02.429 [2024-11-29 09:36:30.140995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:02.429 [2024-11-29 09:36:30.141003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.429 [2024-11-29 09:36:30.141013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.429 [2024-11-29 09:36:30.141021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:02.429 [2024-11-29 09:36:30.141028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.429 [2024-11-29 09:36:30.141035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.429 [2024-11-29 09:36:30.141042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.429 [2024-11-29 09:36:30.141049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.429 [2024-11-29 09:36:30.141058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.429 [2024-11-29 09:36:30.141065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:19:02.429 [2024-11-29 09:36:30.141072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.155381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.155569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.691 [2024-11-29 09:36:30.155617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.237 ms 00:19:02.691 [2024-11-29 09:36:30.155630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.155793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.155811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:02.691 [2024-11-29 09:36:30.155825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:02.691 [2024-11-29 09:36:30.155838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.178550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.178817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.691 [2024-11-29 09:36:30.178852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.677 ms 00:19:02.691 [2024-11-29 09:36:30.178884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.179026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.179049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.691 [2024-11-29 09:36:30.179068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:02.691 [2024-11-29 09:36:30.179086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.179708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.179765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.691 [2024-11-29 09:36:30.179788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:19:02.691 [2024-11-29 09:36:30.179807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.180056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.180073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.691 [2024-11-29 09:36:30.180087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:19:02.691 [2024-11-29 09:36:30.180098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.188688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.188737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.691 [2024-11-29 09:36:30.188753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.558 ms 00:19:02.691 [2024-11-29 09:36:30.188761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.192672] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:02.691 [2024-11-29 09:36:30.192722] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:02.691 [2024-11-29 09:36:30.192740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.192749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:02.691 [2024-11-29 09:36:30.192758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.874 ms 00:19:02.691 [2024-11-29 09:36:30.192766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.208659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.208726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:02.691 [2024-11-29 09:36:30.208739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.812 ms 00:19:02.691 [2024-11-29 09:36:30.208748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.211947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.211997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:02.691 [2024-11-29 09:36:30.212008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.106 ms 00:19:02.691 [2024-11-29 09:36:30.212015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.214882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.214931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:02.691 [2024-11-29 09:36:30.214940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:19:02.691 [2024-11-29 09:36:30.214948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.215300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.215313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.691 [2024-11-29 09:36:30.215322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:02.691 [2024-11-29 09:36:30.215330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.238443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.238702] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:02.691 [2024-11-29 09:36:30.238732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.090 ms 00:19:02.691 [2024-11-29 09:36:30.238747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.247149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:02.691 [2024-11-29 09:36:30.266511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.266764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:02.691 [2024-11-29 09:36:30.266792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.567 ms 00:19:02.691 [2024-11-29 09:36:30.266805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.266923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.266944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:02.691 [2024-11-29 09:36:30.266960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:02.691 [2024-11-29 09:36:30.266972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.267051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.267063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:02.691 [2024-11-29 09:36:30.267075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:02.691 [2024-11-29 09:36:30.267087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.267134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.267149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:02.691 [2024-11-29 09:36:30.267164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:02.691 [2024-11-29 09:36:30.267177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.267216] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:02.691 [2024-11-29 09:36:30.267227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.267237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:02.691 [2024-11-29 09:36:30.267247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:02.691 [2024-11-29 09:36:30.267256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.273271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.273327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:02.691 [2024-11-29 09:36:30.273339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.991 ms 00:19:02.691 [2024-11-29 09:36:30.273356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.273454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.691 [2024-11-29 09:36:30.273472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:02.691 [2024-11-29 09:36:30.273481] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:02.691 [2024-11-29 09:36:30.273490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.691 [2024-11-29 09:36:30.274771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.691 [2024-11-29 09:36:30.276212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.724 ms, result 0 00:19:02.691 [2024-11-29 09:36:30.277314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.691 [2024-11-29 09:36:30.284844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:03.634  [2024-11-29T09:36:32.300Z] Copying: 14/256 [MB] (14 MBps) [2024-11-29T09:36:33.686Z] Copying: 35/256 [MB] (21 MBps) [2024-11-29T09:36:34.630Z] Copying: 56/256 [MB] (21 MBps) [2024-11-29T09:36:35.578Z] Copying: 67/256 [MB] (10 MBps) [2024-11-29T09:36:36.523Z] Copying: 77/256 [MB] (10 MBps) [2024-11-29T09:36:37.468Z] Copying: 90/256 [MB] (12 MBps) [2024-11-29T09:36:38.415Z] Copying: 101/256 [MB] (10 MBps) [2024-11-29T09:36:39.361Z] Copying: 111/256 [MB] (10 MBps) [2024-11-29T09:36:40.308Z] Copying: 121/256 [MB] (10 MBps) [2024-11-29T09:36:41.697Z] Copying: 131/256 [MB] (10 MBps) [2024-11-29T09:36:42.642Z] Copying: 143/256 [MB] (12 MBps) [2024-11-29T09:36:43.585Z] Copying: 157/256 [MB] (13 MBps) [2024-11-29T09:36:44.530Z] Copying: 168/256 [MB] (11 MBps) [2024-11-29T09:36:45.472Z] Copying: 178/256 [MB] (10 MBps) [2024-11-29T09:36:46.414Z] Copying: 196/256 [MB] (17 MBps) [2024-11-29T09:36:47.360Z] Copying: 217/256 [MB] (21 MBps) [2024-11-29T09:36:48.303Z] Copying: 234/256 [MB] (17 MBps) [2024-11-29T09:36:48.565Z] Copying: 254/256 [MB] (19 MBps) [2024-11-29T09:36:48.565Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-29 09:36:48.375507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.839 [2024-11-29 09:36:48.377492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.839 [2024-11-29 09:36:48.377722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.839 [2024-11-29 09:36:48.377754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:20.839 [2024-11-29 09:36:48.377768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.839 [2024-11-29 09:36:48.377811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:20.839 [2024-11-29 09:36:48.378553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.839 [2024-11-29 09:36:48.378622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.839 [2024-11-29 09:36:48.378638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:19:20.839 [2024-11-29 09:36:48.378650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.839 [2024-11-29 09:36:48.379007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.839 [2024-11-29 09:36:48.379053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.839 [2024-11-29 09:36:48.379069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:19:20.839 [2024-11-29 09:36:48.379081] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:20.839 [2024-11-29 09:36:48.382882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.839 [2024-11-29 09:36:48.382910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.839 [2024-11-29 09:36:48.382933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:19:20.839 [2024-11-29 09:36:48.382945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.839 [2024-11-29 09:36:48.389992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.839 [2024-11-29 09:36:48.390039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.839 [2024-11-29 09:36:48.390063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.013 ms 00:19:20.839 [2024-11-29 09:36:48.390074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.839 [2024-11-29 09:36:48.392761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.839 [2024-11-29 09:36:48.392936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.839 [2024-11-29 09:36:48.392959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:19:20.839 [2024-11-29 09:36:48.392970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.839 [2024-11-29 09:36:48.397372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.839 [2024-11-29 09:36:48.397424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.840 [2024-11-29 09:36:48.397451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.349 ms 00:19:20.840 [2024-11-29 09:36:48.397462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.840 [2024-11-29 09:36:48.397683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.840 [2024-11-29 09:36:48.397704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.840 [2024-11-29 09:36:48.397731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:19:20.840 [2024-11-29 09:36:48.397744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.840 [2024-11-29 09:36:48.401173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.840 [2024-11-29 09:36:48.401222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:20.840 [2024-11-29 09:36:48.401236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.402 ms 00:19:20.840 [2024-11-29 09:36:48.401246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.840 [2024-11-29 09:36:48.404063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.840 [2024-11-29 09:36:48.404225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:20.840 [2024-11-29 09:36:48.404247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:19:20.840 [2024-11-29 09:36:48.404256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.840 [2024-11-29 09:36:48.406809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.840 [2024-11-29 09:36:48.406861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.840 [2024-11-29 09:36:48.406875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.462 ms 00:19:20.840 
[2024-11-29 09:36:48.406886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.840 [2024-11-29 09:36:48.408911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.840 [2024-11-29 09:36:48.408962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.840 [2024-11-29 09:36:48.408977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:19:20.840 [2024-11-29 09:36:48.408987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.840 [2024-11-29 09:36:48.409039] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.840 [2024-11-29 09:36:48.409060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409318] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 
[2024-11-29 09:36:48.409696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:20.840 [2024-11-29 09:36:48.409749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.409999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:19:20.841 [2024-11-29 09:36:48.410025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.841 [2024-11-29 09:36:48.410435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.841 [2024-11-29 09:36:48.410458] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f68941d2-65a9-48fc-bf3a-b4b6b1215387 00:19:20.841 [2024-11-29 09:36:48.410471] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.841 [2024-11-29 09:36:48.410484] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.841 [2024-11-29 09:36:48.410504] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.841 [2024-11-29 09:36:48.410519] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.841 [2024-11-29 09:36:48.410536] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.841 [2024-11-29 09:36:48.410554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.841 [2024-11-29 09:36:48.410571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.841 [2024-11-29 09:36:48.410597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.841 [2024-11-29 09:36:48.410610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.841 [2024-11-29 09:36:48.410622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.841 [2024-11-29 09:36:48.410635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.841 [2024-11-29 09:36:48.410650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:19:20.841 [2024-11-29 09:36:48.410663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.841 [2024-11-29 09:36:48.413108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.842 [2024-11-29 09:36:48.413148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.842 [2024-11-29 09:36:48.413166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.389 ms 00:19:20.842 [2024-11-29 09:36:48.413188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.413324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.842 [2024-11-29 09:36:48.413338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.842 [2024-11-29 09:36:48.413353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:20.842 [2024-11-29 09:36:48.413366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.421265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.421318] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.842 [2024-11-29 09:36:48.421341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.421352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.421483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.421499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.842 [2024-11-29 09:36:48.421513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.421526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.421646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.421663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.842 [2024-11-29 09:36:48.421676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.421695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.421722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.421754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.842 [2024-11-29 09:36:48.421775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.421787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.435431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.435492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.842 [2024-11-29 09:36:48.435516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.435534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.446446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.446508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.842 [2024-11-29 09:36:48.446526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.446538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.446628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.446644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.842 [2024-11-29 09:36:48.446658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.446670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.446719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.446735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.842 [2024-11-29 09:36:48.446749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.446763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.446885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:20.842 [2024-11-29 09:36:48.446902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.842 [2024-11-29 09:36:48.446916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.446931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.446979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.446998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:20.842 [2024-11-29 09:36:48.447013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.447025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.447087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.447102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.842 [2024-11-29 09:36:48.447116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.447129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.447203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.842 [2024-11-29 09:36:48.447218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.842 [2024-11-29 09:36:48.447232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.842 [2024-11-29 09:36:48.447245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.842 [2024-11-29 09:36:48.447461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.925 ms, result 0 00:19:21.102 00:19:21.102 00:19:21.102 09:36:48 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:21.102 09:36:48 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:21.671 09:36:49 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:21.671 [2024-11-29 09:36:49.314353] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:21.671 [2024-11-29 09:36:49.314498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89722 ] 00:19:21.932 [2024-11-29 09:36:49.448794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
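For readability, the ftl.ftl_trim step logged above (ftl/trim.sh lines 86-90 as reported by the test) reduces to the following shell sequence. This is a minimal sketch assembled only from the commands visible in the log; the paths, the ftl0 bdev name, the 4194304-byte compare length and the 1024-block count are taken verbatim from the output above, and it assumes the FTL JSON configuration written earlier in the run is still present at test/ftl/config/ftl.json.

    # Check that the 4 MiB read back from the trimmed range is all zeroes.
    cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero

    # Record a checksum of the read-back data for a later comparison.
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data

    # Copy 1024 blocks of the random pattern onto the ftl0 bdev via spdk_dd,
    # re-attaching the FTL device from the saved JSON configuration.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --count=1024 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The EAL and FTL startup records that follow are spdk_dd bringing ftl0 back up from that configuration before it performs the copy.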
00:19:21.932 [2024-11-29 09:36:49.471704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.932 [2024-11-29 09:36:49.499895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.932 [2024-11-29 09:36:49.614928] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:21.932 [2024-11-29 09:36:49.615162] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.194 [2024-11-29 09:36:49.774990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.775045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.195 [2024-11-29 09:36:49.775060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:22.195 [2024-11-29 09:36:49.775074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.777616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.777665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.195 [2024-11-29 09:36:49.777681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:19:22.195 [2024-11-29 09:36:49.777689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.777791] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.195 [2024-11-29 09:36:49.778050] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.195 [2024-11-29 09:36:49.778066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.778075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.195 [2024-11-29 09:36:49.778084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:22.195 [2024-11-29 09:36:49.778092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.780008] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.195 [2024-11-29 09:36:49.784162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.784348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.195 [2024-11-29 09:36:49.784799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.155 ms 00:19:22.195 [2024-11-29 09:36:49.784826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.784955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.784968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.195 [2024-11-29 09:36:49.784979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:22.195 [2024-11-29 09:36:49.784987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.793023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.793066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.195 [2024-11-29 09:36:49.793077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.977 ms 00:19:22.195 [2024-11-29 09:36:49.793089] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.793233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.793249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.195 [2024-11-29 09:36:49.793260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:22.195 [2024-11-29 09:36:49.793268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.793299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.793307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.195 [2024-11-29 09:36:49.793316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:22.195 [2024-11-29 09:36:49.793323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.793345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:22.195 [2024-11-29 09:36:49.795424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.795468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.195 [2024-11-29 09:36:49.795478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:19:22.195 [2024-11-29 09:36:49.795489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.795534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.795544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.195 [2024-11-29 09:36:49.795556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:22.195 [2024-11-29 09:36:49.795563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.795581] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.195 [2024-11-29 09:36:49.795632] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:22.195 [2024-11-29 09:36:49.795674] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.195 [2024-11-29 09:36:49.795694] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:22.195 [2024-11-29 09:36:49.795801] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.195 [2024-11-29 09:36:49.795812] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.195 [2024-11-29 09:36:49.795823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.195 [2024-11-29 09:36:49.795834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.195 [2024-11-29 09:36:49.795844] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.195 [2024-11-29 09:36:49.795852] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:22.195 [2024-11-29 09:36:49.795864] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:22.195 [2024-11-29 09:36:49.795876] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.195 [2024-11-29 09:36:49.795886] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.195 [2024-11-29 09:36:49.795895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.795903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.195 [2024-11-29 09:36:49.795911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:22.195 [2024-11-29 09:36:49.795918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.796006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.195 [2024-11-29 09:36:49.796015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.195 [2024-11-29 09:36:49.796023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:22.195 [2024-11-29 09:36:49.796030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.195 [2024-11-29 09:36:49.796131] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.195 [2024-11-29 09:36:49.796142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.195 [2024-11-29 09:36:49.796151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.195 [2024-11-29 09:36:49.796164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.195 [2024-11-29 09:36:49.796186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:22.195 [2024-11-29 09:36:49.796207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.195 [2024-11-29 09:36:49.796215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.195 [2024-11-29 09:36:49.796232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.195 [2024-11-29 09:36:49.796239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:22.195 [2024-11-29 09:36:49.796248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.195 [2024-11-29 09:36:49.796257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.195 [2024-11-29 09:36:49.796265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:22.195 [2024-11-29 09:36:49.796274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.195 [2024-11-29 09:36:49.796290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:22.195 [2024-11-29 09:36:49.796299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.195 [2024-11-29 09:36:49.796315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796322] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.195 [2024-11-29 09:36:49.796335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.195 [2024-11-29 09:36:49.796344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.195 [2024-11-29 09:36:49.796359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.195 [2024-11-29 09:36:49.796367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.195 [2024-11-29 09:36:49.796382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.195 [2024-11-29 09:36:49.796390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.195 [2024-11-29 09:36:49.796406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.195 [2024-11-29 09:36:49.796413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:22.195 [2024-11-29 09:36:49.796422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.195 [2024-11-29 09:36:49.796430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.195 [2024-11-29 09:36:49.796438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:22.195 [2024-11-29 09:36:49.796446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.195 [2024-11-29 09:36:49.796453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.195 [2024-11-29 09:36:49.796463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:22.196 [2024-11-29 09:36:49.796470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.196 [2024-11-29 09:36:49.796479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.196 [2024-11-29 09:36:49.796487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:22.196 [2024-11-29 09:36:49.796495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.196 [2024-11-29 09:36:49.796502] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.196 [2024-11-29 09:36:49.796511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.196 [2024-11-29 09:36:49.796520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.196 [2024-11-29 09:36:49.796533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.196 [2024-11-29 09:36:49.796544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.196 [2024-11-29 09:36:49.796553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.196 [2024-11-29 09:36:49.796561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.196 [2024-11-29 09:36:49.796570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.196 [2024-11-29 09:36:49.796579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.196 [2024-11-29 09:36:49.796601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:22.196 [2024-11-29 09:36:49.796611] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.196 [2024-11-29 09:36:49.796622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.196 [2024-11-29 09:36:49.796634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:22.196 [2024-11-29 09:36:49.796642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:22.196 [2024-11-29 09:36:49.796650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:22.196 [2024-11-29 09:36:49.796657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:22.196 [2024-11-29 09:36:49.796665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:22.196 [2024-11-29 09:36:49.796672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:22.196 [2024-11-29 09:36:49.796679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:22.196 [2024-11-29 09:36:49.796686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:22.196 [2024-11-29 09:36:49.796694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:22.196 [2024-11-29 09:36:49.796701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:22.196 [2024-11-29 09:36:49.796708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:22.196 [2024-11-29 09:36:49.796715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:22.196 [2024-11-29 09:36:49.796723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:22.196 [2024-11-29 09:36:49.796730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:22.196 [2024-11-29 09:36:49.796738] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.196 [2024-11-29 09:36:49.796749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.196 [2024-11-29 09:36:49.796758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.196 [2024-11-29 09:36:49.796765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.196 [2024-11-29 09:36:49.796772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.196 [2024-11-29 09:36:49.796779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.196 [2024-11-29 09:36:49.796786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.796794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.196 [2024-11-29 09:36:49.796801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:19:22.196 [2024-11-29 09:36:49.796808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.810574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.810635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.196 [2024-11-29 09:36:49.810647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.712 ms 00:19:22.196 [2024-11-29 09:36:49.810662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.810801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.810812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.196 [2024-11-29 09:36:49.810821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:22.196 [2024-11-29 09:36:49.810829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.831380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.831430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.196 [2024-11-29 09:36:49.831449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.526 ms 00:19:22.196 [2024-11-29 09:36:49.831461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.831559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.831571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.196 [2024-11-29 09:36:49.831581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.196 [2024-11-29 09:36:49.831613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.832121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.832153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.196 [2024-11-29 09:36:49.832164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:19:22.196 [2024-11-29 09:36:49.832172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.832333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.832413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.196 [2024-11-29 09:36:49.832426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:22.196 [2024-11-29 09:36:49.832435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.840647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 
09:36:49.840698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.196 [2024-11-29 09:36:49.840708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.186 ms 00:19:22.196 [2024-11-29 09:36:49.840719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.844573] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:22.196 [2024-11-29 09:36:49.844638] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:22.196 [2024-11-29 09:36:49.844650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.844659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:22.196 [2024-11-29 09:36:49.844668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.829 ms 00:19:22.196 [2024-11-29 09:36:49.844675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.860378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.860424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:22.196 [2024-11-29 09:36:49.860445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.637 ms 00:19:22.196 [2024-11-29 09:36:49.860452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.863424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.863616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:22.196 [2024-11-29 09:36:49.863635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:19:22.196 [2024-11-29 09:36:49.863643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.866339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.866386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:22.196 [2024-11-29 09:36:49.866396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:19:22.196 [2024-11-29 09:36:49.866403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.866780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.866794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:22.196 [2024-11-29 09:36:49.866804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:19:22.196 [2024-11-29 09:36:49.866812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.889293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.196 [2024-11-29 09:36:49.889351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:22.196 [2024-11-29 09:36:49.889364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.428 ms 00:19:22.196 [2024-11-29 09:36:49.889380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.196 [2024-11-29 09:36:49.898202] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:22.460 [2024-11-29 09:36:49.918526] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:49.918578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:22.460 [2024-11-29 09:36:49.918607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.054 ms 00:19:22.460 [2024-11-29 09:36:49.918616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:49.918727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:49.918738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:22.460 [2024-11-29 09:36:49.918749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:22.460 [2024-11-29 09:36:49.918757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:49.918820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:49.918835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:22.460 [2024-11-29 09:36:49.918844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:22.460 [2024-11-29 09:36:49.918852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:49.918881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:49.918893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:22.460 [2024-11-29 09:36:49.918902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:22.460 [2024-11-29 09:36:49.918910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:49.918949] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:22.460 [2024-11-29 09:36:49.918964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:49.918972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:22.460 [2024-11-29 09:36:49.918981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:22.460 [2024-11-29 09:36:49.918990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:49.925023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:49.925070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:22.460 [2024-11-29 09:36:49.925090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.010 ms 00:19:22.460 [2024-11-29 09:36:49.925099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:49.925195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:49.925206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:22.460 [2024-11-29 09:36:49.925216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:22.460 [2024-11-29 09:36:49.925224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:49.926347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.460 [2024-11-29 09:36:49.927771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.042 
ms, result 0 00:19:22.460 [2024-11-29 09:36:49.928760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.460 [2024-11-29 09:36:49.936426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.460  [2024-11-29T09:36:50.186Z] Copying: 4096/4096 [kB] (average 19 MBps)[2024-11-29 09:36:50.140063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.460 [2024-11-29 09:36:50.142013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.142065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.460 [2024-11-29 09:36:50.142081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.460 [2024-11-29 09:36:50.142090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.142113] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:22.460 [2024-11-29 09:36:50.142802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.142888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.460 [2024-11-29 09:36:50.142903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:22.460 [2024-11-29 09:36:50.142912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.145869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.145926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.460 [2024-11-29 09:36:50.145941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:19:22.460 [2024-11-29 09:36:50.145950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.150443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.150480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:22.460 [2024-11-29 09:36:50.150490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.466 ms 00:19:22.460 [2024-11-29 09:36:50.150498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.157476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.157679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.460 [2024-11-29 09:36:50.157699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.934 ms 00:19:22.460 [2024-11-29 09:36:50.157707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.160312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.160356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.460 [2024-11-29 09:36:50.160367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:19:22.460 [2024-11-29 09:36:50.160374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.164651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.164695] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.460 [2024-11-29 09:36:50.164716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:19:22.460 [2024-11-29 09:36:50.164724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.164864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.164878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.460 [2024-11-29 09:36:50.164887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:22.460 [2024-11-29 09:36:50.164895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.168063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.168223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.460 [2024-11-29 09:36:50.168240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:19:22.460 [2024-11-29 09:36:50.168247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.170878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.170923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.460 [2024-11-29 09:36:50.170933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:19:22.460 [2024-11-29 09:36:50.170940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.172511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.172559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.460 [2024-11-29 09:36:50.172568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.530 ms 00:19:22.460 [2024-11-29 09:36:50.172576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.173922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.460 [2024-11-29 09:36:50.173966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.460 [2024-11-29 09:36:50.173975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:19:22.460 [2024-11-29 09:36:50.173982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.460 [2024-11-29 09:36:50.174024] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.460 [2024-11-29 09:36:50.174040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 
09:36:50.174099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.460 [2024-11-29 09:36:50.174175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:19:22.461 [2024-11-29 09:36:50.174285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.174974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.461 [2024-11-29 09:36:50.175606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.461 [2024-11-29 09:36:50.175615] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f68941d2-65a9-48fc-bf3a-b4b6b1215387 00:19:22.461 [2024-11-29 09:36:50.175625] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.461 [2024-11-29 09:36:50.175633] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.461 [2024-11-29 09:36:50.175645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.461 [2024-11-29 09:36:50.175656] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.461 [2024-11-29 09:36:50.175664] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.461 [2024-11-29 09:36:50.175672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.462 [2024-11-29 09:36:50.175680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.462 [2024-11-29 09:36:50.175686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.462 [2024-11-29 09:36:50.175693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.462 [2024-11-29 09:36:50.175702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.462 [2024-11-29 09:36:50.175710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.462 [2024-11-29 09:36:50.175720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:19:22.462 [2024-11-29 09:36:50.175728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.462 [2024-11-29 09:36:50.178148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.462 [2024-11-29 09:36:50.178186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.462 [2024-11-29 09:36:50.178200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.393 ms 00:19:22.462 [2024-11-29 09:36:50.178208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.462 [2024-11-29 09:36:50.178324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.462 [2024-11-29 09:36:50.178333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.462 [2024-11-29 09:36:50.178347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:22.462 [2024-11-29 09:36:50.178358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.722 [2024-11-29 09:36:50.186130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.186286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.723 [2024-11-29 09:36:50.186339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.186361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.186457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.186480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.723 [2024-11-29 09:36:50.186500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.186566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.186658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.186741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.723 [2024-11-29 09:36:50.186806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.186816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.186837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.186846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.723 [2024-11-29 09:36:50.186854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:22.723 [2024-11-29 09:36:50.186863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.200451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.200513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.723 [2024-11-29 09:36:50.200524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.200533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.211480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.211536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.723 [2024-11-29 09:36:50.211548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.211557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.211631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.211642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.723 [2024-11-29 09:36:50.211651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.211667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.211699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.211714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.723 [2024-11-29 09:36:50.211728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.211758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.211836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.211849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.723 [2024-11-29 09:36:50.211858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.211871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.211914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.211925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.723 [2024-11-29 09:36:50.211934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.211943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.211987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.211996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.723 [2024-11-29 09:36:50.212005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.212012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.212067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.723 [2024-11-29 09:36:50.212078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.723 [2024-11-29 09:36:50.212086] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.723 [2024-11-29 09:36:50.212095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.723 [2024-11-29 09:36:50.212253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.210 ms, result 0 00:19:22.723 00:19:22.723 00:19:22.983 09:36:50 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89745 00:19:22.983 09:36:50 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89745 00:19:22.983 09:36:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89745 ']' 00:19:22.983 09:36:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.983 09:36:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.983 09:36:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.983 09:36:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.983 09:36:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:22.983 09:36:50 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:22.983 [2024-11-29 09:36:50.550164] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:22.983 [2024-11-29 09:36:50.550952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89745 ] 00:19:22.983 [2024-11-29 09:36:50.691362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
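The ftl.ftl_trim trace above boils down to the common SPDK test sequence: launch spdk_tgt in the background, record its pid for the later killprocess, and poll the RPC socket until the target answers before driving it with rpc.py. A condensed bash sketch of that sequence follows, under the assumption that an explicit polling loop can stand in for autotest_common.sh's waitforlisten helper (the 100-iteration cap mirrors the max_retries=100 visible in the trace; the 0.5 s sleep is illustrative):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &      # start the target with FTL init logging (ftl/trim.sh@92)
svcpid=$!                                                          # pid later handed to killprocess (ftl/trim.sh@93)
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do                                    # illustrative stand-in for waitforlisten (ftl/trim.sh@94)
    # rpc_get_methods only succeeds once the target is listening on the UNIX socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.5
done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config            # ftl/trim.sh@96; restores the saved bdev/FTL configuration (how it is fed in is not shown in this excerpt)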
00:19:23.243 [2024-11-29 09:36:50.721080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.243 [2024-11-29 09:36:50.749630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.815 09:36:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.815 09:36:51 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:23.815 09:36:51 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:24.075 [2024-11-29 09:36:51.607930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.075 [2024-11-29 09:36:51.608008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.075 [2024-11-29 09:36:51.784845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.075 [2024-11-29 09:36:51.784900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.075 [2024-11-29 09:36:51.784918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.075 [2024-11-29 09:36:51.784927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.075 [2024-11-29 09:36:51.787448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.075 [2024-11-29 09:36:51.787498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.075 [2024-11-29 09:36:51.787511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:19:24.075 [2024-11-29 09:36:51.787519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.075 [2024-11-29 09:36:51.787641] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.075 [2024-11-29 09:36:51.788194] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.075 [2024-11-29 09:36:51.788261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.075 [2024-11-29 09:36:51.788271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.075 [2024-11-29 09:36:51.788284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:19:24.075 [2024-11-29 09:36:51.788293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.075 [2024-11-29 09:36:51.790051] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:24.075 [2024-11-29 09:36:51.793906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.075 [2024-11-29 09:36:51.793959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:24.075 [2024-11-29 09:36:51.793971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.862 ms 00:19:24.075 [2024-11-29 09:36:51.793981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.075 [2024-11-29 09:36:51.794064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.075 [2024-11-29 09:36:51.794083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:24.075 [2024-11-29 09:36:51.794096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:24.076 [2024-11-29 09:36:51.794109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.802019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.338 [2024-11-29 
09:36:51.802068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.338 [2024-11-29 09:36:51.802080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.859 ms 00:19:24.338 [2024-11-29 09:36:51.802092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.802208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.338 [2024-11-29 09:36:51.802223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.338 [2024-11-29 09:36:51.802235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:24.338 [2024-11-29 09:36:51.802245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.802277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.338 [2024-11-29 09:36:51.802288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.338 [2024-11-29 09:36:51.802297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:24.338 [2024-11-29 09:36:51.802307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.802335] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:24.338 [2024-11-29 09:36:51.804418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.338 [2024-11-29 09:36:51.804461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.338 [2024-11-29 09:36:51.804473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:19:24.338 [2024-11-29 09:36:51.804481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.804522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.338 [2024-11-29 09:36:51.804531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.338 [2024-11-29 09:36:51.804542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:24.338 [2024-11-29 09:36:51.804550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.804573] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:24.338 [2024-11-29 09:36:51.804613] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:24.338 [2024-11-29 09:36:51.804662] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:24.338 [2024-11-29 09:36:51.804679] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:24.338 [2024-11-29 09:36:51.804801] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:24.338 [2024-11-29 09:36:51.804817] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.338 [2024-11-29 09:36:51.804836] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:24.338 [2024-11-29 09:36:51.804847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.338 [2024-11-29 09:36:51.804861] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.338 [2024-11-29 09:36:51.804870] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:24.338 [2024-11-29 09:36:51.804881] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.338 [2024-11-29 09:36:51.804891] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:24.338 [2024-11-29 09:36:51.804900] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:24.338 [2024-11-29 09:36:51.804909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.338 [2024-11-29 09:36:51.804920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.338 [2024-11-29 09:36:51.804929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:24.338 [2024-11-29 09:36:51.804938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.805026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.338 [2024-11-29 09:36:51.805040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.338 [2024-11-29 09:36:51.805049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:24.338 [2024-11-29 09:36:51.805059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.338 [2024-11-29 09:36:51.805163] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.338 [2024-11-29 09:36:51.805176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.338 [2024-11-29 09:36:51.805185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.338 [2024-11-29 09:36:51.805198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.338 [2024-11-29 09:36:51.805208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.338 [2024-11-29 09:36:51.805219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.338 [2024-11-29 09:36:51.805227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:24.338 [2024-11-29 09:36:51.805236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.338 [2024-11-29 09:36:51.805250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:24.338 [2024-11-29 09:36:51.805261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.338 [2024-11-29 09:36:51.805269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.338 [2024-11-29 09:36:51.805279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:24.338 [2024-11-29 09:36:51.805286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.338 [2024-11-29 09:36:51.805297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.338 [2024-11-29 09:36:51.805306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:24.338 [2024-11-29 09:36:51.805315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.338 [2024-11-29 09:36:51.805323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.338 [2024-11-29 09:36:51.805333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:24.338 [2024-11-29 09:36:51.805341] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:24.338 [2024-11-29 09:36:51.805354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.338 [2024-11-29 09:36:51.805363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:24.338 [2024-11-29 09:36:51.805373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.339 [2024-11-29 09:36:51.805380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.339 [2024-11-29 09:36:51.805390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:24.339 [2024-11-29 09:36:51.805398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.339 [2024-11-29 09:36:51.805407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.339 [2024-11-29 09:36:51.805415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:24.339 [2024-11-29 09:36:51.805425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.339 [2024-11-29 09:36:51.805432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:24.339 [2024-11-29 09:36:51.805442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:24.339 [2024-11-29 09:36:51.805450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.339 [2024-11-29 09:36:51.805461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.339 [2024-11-29 09:36:51.805470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:24.339 [2024-11-29 09:36:51.805479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.339 [2024-11-29 09:36:51.805487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.339 [2024-11-29 09:36:51.805499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:24.339 [2024-11-29 09:36:51.805507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.339 [2024-11-29 09:36:51.805517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:24.339 [2024-11-29 09:36:51.805524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:24.339 [2024-11-29 09:36:51.805559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.339 [2024-11-29 09:36:51.805567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:24.339 [2024-11-29 09:36:51.805576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:24.339 [2024-11-29 09:36:51.805599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.339 [2024-11-29 09:36:51.805608] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.339 [2024-11-29 09:36:51.805617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.339 [2024-11-29 09:36:51.805630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.339 [2024-11-29 09:36:51.805637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.339 [2024-11-29 09:36:51.805648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.339 [2024-11-29 09:36:51.805654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.339 [2024-11-29 09:36:51.805663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.339 [2024-11-29 09:36:51.805670] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.339 [2024-11-29 09:36:51.805681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.339 [2024-11-29 09:36:51.805691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.339 [2024-11-29 09:36:51.805703] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.339 [2024-11-29 09:36:51.805716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.339 [2024-11-29 09:36:51.805728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:24.339 [2024-11-29 09:36:51.805736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:24.339 [2024-11-29 09:36:51.805745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:24.339 [2024-11-29 09:36:51.805752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:24.339 [2024-11-29 09:36:51.805761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:24.339 [2024-11-29 09:36:51.805769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:24.339 [2024-11-29 09:36:51.805778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:24.339 [2024-11-29 09:36:51.805785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:24.339 [2024-11-29 09:36:51.805794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:24.339 [2024-11-29 09:36:51.805801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:24.339 [2024-11-29 09:36:51.805811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:24.339 [2024-11-29 09:36:51.805817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:24.339 [2024-11-29 09:36:51.805828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:24.339 [2024-11-29 09:36:51.805836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:24.339 [2024-11-29 09:36:51.805845] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.339 [2024-11-29 09:36:51.805853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.339 [2024-11-29 09:36:51.805863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:24.339 [2024-11-29 09:36:51.805870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.339 [2024-11-29 09:36:51.805879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.339 [2024-11-29 09:36:51.805886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.339 [2024-11-29 09:36:51.805895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.805903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.339 [2024-11-29 09:36:51.805913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:19:24.339 [2024-11-29 09:36:51.805919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.819555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.819637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.339 [2024-11-29 09:36:51.819652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.569 ms 00:19:24.339 [2024-11-29 09:36:51.819664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.819811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.819826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:24.339 [2024-11-29 09:36:51.819838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:24.339 [2024-11-29 09:36:51.819845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.832097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.832142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.339 [2024-11-29 09:36:51.832158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:19:24.339 [2024-11-29 09:36:51.832166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.832233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.832243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.339 [2024-11-29 09:36:51.832254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:24.339 [2024-11-29 09:36:51.832262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.832793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.832821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.339 [2024-11-29 09:36:51.832833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:19:24.339 [2024-11-29 09:36:51.832844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.832994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.833003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.339 [2024-11-29 09:36:51.833014] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:19:24.339 [2024-11-29 09:36:51.833022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.841126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.841169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.339 [2024-11-29 09:36:51.841182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.076 ms 00:19:24.339 [2024-11-29 09:36:51.841190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.855117] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:24.339 [2024-11-29 09:36:51.855419] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:24.339 [2024-11-29 09:36:51.855461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.855479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:24.339 [2024-11-29 09:36:51.855501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.149 ms 00:19:24.339 [2024-11-29 09:36:51.855517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.875412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.875460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:24.339 [2024-11-29 09:36:51.875478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.772 ms 00:19:24.339 [2024-11-29 09:36:51.875486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.878365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.878545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:24.339 [2024-11-29 09:36:51.878567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.779 ms 00:19:24.339 [2024-11-29 09:36:51.878575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.339 [2024-11-29 09:36:51.881138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.339 [2024-11-29 09:36:51.881182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:24.340 [2024-11-29 09:36:51.881194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:19:24.340 [2024-11-29 09:36:51.881201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.881569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.881583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:24.340 [2024-11-29 09:36:51.881747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:24.340 [2024-11-29 09:36:51.881767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.904070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.904266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:24.340 [2024-11-29 09:36:51.904335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.257 ms 
00:19:24.340 [2024-11-29 09:36:51.904363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.912491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:24.340 [2024-11-29 09:36:51.930881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.930934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:24.340 [2024-11-29 09:36:51.930948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.311 ms 00:19:24.340 [2024-11-29 09:36:51.930958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.931058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.931072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:24.340 [2024-11-29 09:36:51.931081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:24.340 [2024-11-29 09:36:51.931092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.931150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.931166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:24.340 [2024-11-29 09:36:51.931175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:24.340 [2024-11-29 09:36:51.931189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.931215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.931236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:24.340 [2024-11-29 09:36:51.931244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:24.340 [2024-11-29 09:36:51.931254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.931289] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:24.340 [2024-11-29 09:36:51.931301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.931309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:24.340 [2024-11-29 09:36:51.931319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:24.340 [2024-11-29 09:36:51.931327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.937611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.937662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:24.340 [2024-11-29 09:36:51.937681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms 00:19:24.340 [2024-11-29 09:36:51.937690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 09:36:51.937792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.340 [2024-11-29 09:36:51.937803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:24.340 [2024-11-29 09:36:51.937815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:24.340 [2024-11-29 09:36:51.937823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.340 [2024-11-29 
09:36:51.938976] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.340 [2024-11-29 09:36:51.940285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.788 ms, result 0 00:19:24.340 [2024-11-29 09:36:51.942322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.340 Some configs were skipped because the RPC state that can call them passed over. 00:19:24.340 09:36:51 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:24.601 [2024-11-29 09:36:52.176078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.601 [2024-11-29 09:36:52.176288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:24.601 [2024-11-29 09:36:52.176359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.149 ms 00:19:24.601 [2024-11-29 09:36:52.176388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.601 [2024-11-29 09:36:52.176475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.547 ms, result 0 00:19:24.601 true 00:19:24.601 09:36:52 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:24.861 [2024-11-29 09:36:52.392071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.861 [2024-11-29 09:36:52.392243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:24.861 [2024-11-29 09:36:52.392309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:19:24.861 [2024-11-29 09:36:52.392334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.861 [2024-11-29 09:36:52.392393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.206 ms, result 0 00:19:24.861 true 00:19:24.861 09:36:52 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89745 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89745 ']' 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89745 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89745 00:19:24.861 killing process with pid 89745 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89745' 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89745 00:19:24.861 09:36:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89745 00:19:24.861 [2024-11-29 09:36:52.570032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.861 [2024-11-29 09:36:52.570099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:24.861 [2024-11-29 09:36:52.570113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:24.861 [2024-11-29 
09:36:52.570125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.861 [2024-11-29 09:36:52.570151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:24.861 [2024-11-29 09:36:52.570827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.861 [2024-11-29 09:36:52.570856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:24.861 [2024-11-29 09:36:52.570867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:19:24.861 [2024-11-29 09:36:52.570875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.861 [2024-11-29 09:36:52.571175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.861 [2024-11-29 09:36:52.571199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:24.861 [2024-11-29 09:36:52.571210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:24.861 [2024-11-29 09:36:52.571219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.861 [2024-11-29 09:36:52.575602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.861 [2024-11-29 09:36:52.575637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:24.861 [2024-11-29 09:36:52.575652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.359 ms 00:19:24.861 [2024-11-29 09:36:52.575660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.861 [2024-11-29 09:36:52.582702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.861 [2024-11-29 09:36:52.582739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:24.861 [2024-11-29 09:36:52.582755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.998 ms 00:19:24.861 [2024-11-29 09:36:52.582762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.585454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.123 [2024-11-29 09:36:52.585653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:25.123 [2024-11-29 09:36:52.585677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:19:25.123 [2024-11-29 09:36:52.585684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.590022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.123 [2024-11-29 09:36:52.590066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:25.123 [2024-11-29 09:36:52.590081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.290 ms 00:19:25.123 [2024-11-29 09:36:52.590090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.590228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.123 [2024-11-29 09:36:52.590238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:25.123 [2024-11-29 09:36:52.590249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:25.123 [2024-11-29 09:36:52.590261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.592742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.123 [2024-11-29 09:36:52.592882] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:25.123 [2024-11-29 09:36:52.592906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:19:25.123 [2024-11-29 09:36:52.592913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.595922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.123 [2024-11-29 09:36:52.596055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:25.123 [2024-11-29 09:36:52.596075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:19:25.123 [2024-11-29 09:36:52.596082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.597878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.123 [2024-11-29 09:36:52.597919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:25.123 [2024-11-29 09:36:52.597931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:19:25.123 [2024-11-29 09:36:52.597938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.599964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.123 [2024-11-29 09:36:52.600004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:25.123 [2024-11-29 09:36:52.600016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:19:25.123 [2024-11-29 09:36:52.600023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.123 [2024-11-29 09:36:52.600065] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:25.124 [2024-11-29 09:36:52.600080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600188] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600408] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 
09:36:52.600632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:25.124 [2024-11-29 09:36:52.600860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:25.124 [2024-11-29 09:36:52.600877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:25.125 [2024-11-29 09:36:52.600982] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:25.125 [2024-11-29 09:36:52.600994] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f68941d2-65a9-48fc-bf3a-b4b6b1215387 00:19:25.125 [2024-11-29 09:36:52.601003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:25.125 [2024-11-29 09:36:52.601012] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:25.125 [2024-11-29 09:36:52.601019] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:25.125 [2024-11-29 09:36:52.601029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:25.125 [2024-11-29 09:36:52.601038] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:25.125 [2024-11-29 09:36:52.601047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:25.125 [2024-11-29 09:36:52.601055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:25.125 [2024-11-29 09:36:52.601063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:25.125 [2024-11-29 09:36:52.601070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:25.125 [2024-11-29 09:36:52.601079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.125 [2024-11-29 09:36:52.601086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:25.125 [2024-11-29 09:36:52.601100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:19:25.125 [2024-11-29 09:36:52.601107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.603151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.125 [2024-11-29 09:36:52.603285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:25.125 [2024-11-29 09:36:52.603306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:19:25.125 [2024-11-29 09:36:52.603315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.603457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.125 [2024-11-29 09:36:52.603467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:25.125 [2024-11-29 09:36:52.603478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:25.125 [2024-11-29 09:36:52.603488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.610989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.611038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.125 [2024-11-29 09:36:52.611051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.611059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.611141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.611149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.125 [2024-11-29 09:36:52.611163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.611178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.611232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.611241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.125 [2024-11-29 09:36:52.611252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.611260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.611280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.611289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.125 [2024-11-29 09:36:52.611299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.611306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.626125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.626371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.125 [2024-11-29 09:36:52.626401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.626410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.638235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.125 [2024-11-29 09:36:52.638252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 
[2024-11-29 09:36:52.638265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.638347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.125 [2024-11-29 09:36:52.638358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.638366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.638411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.125 [2024-11-29 09:36:52.638422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.638429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.638527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.125 [2024-11-29 09:36:52.638538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.638546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.638637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:25.125 [2024-11-29 09:36:52.638655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.638663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.638726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.125 [2024-11-29 09:36:52.638737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.638745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.125 [2024-11-29 09:36:52.638806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.125 [2024-11-29 09:36:52.638817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.125 [2024-11-29 09:36:52.638828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.125 [2024-11-29 09:36:52.638984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.918 ms, result 0 00:19:25.386 09:36:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.386 [2024-11-29 09:36:52.944848] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:19:25.386 [2024-11-29 09:36:52.945000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89787 ] 00:19:25.386 [2024-11-29 09:36:53.080845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:25.648 [2024-11-29 09:36:53.114529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.648 [2024-11-29 09:36:53.142961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.648 [2024-11-29 09:36:53.258397] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:25.648 [2024-11-29 09:36:53.258765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:25.911 [2024-11-29 09:36:53.419383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.911 [2024-11-29 09:36:53.419440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:25.911 [2024-11-29 09:36:53.419456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:25.911 [2024-11-29 09:36:53.419465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.911 [2024-11-29 09:36:53.422065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.911 [2024-11-29 09:36:53.422115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.911 [2024-11-29 09:36:53.422126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:19:25.911 [2024-11-29 09:36:53.422134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.911 [2024-11-29 09:36:53.422238] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:25.911 [2024-11-29 09:36:53.422498] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:25.911 [2024-11-29 09:36:53.422520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.911 [2024-11-29 09:36:53.422529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.911 [2024-11-29 09:36:53.422539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:25.911 [2024-11-29 09:36:53.422547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.911 [2024-11-29 09:36:53.424240] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:25.911 [2024-11-29 09:36:53.428502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.911 [2024-11-29 09:36:53.428559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:25.911 [2024-11-29 09:36:53.428572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.263 ms 00:19:25.911 [2024-11-29 09:36:53.428582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.911 [2024-11-29 09:36:53.428703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.911 [2024-11-29 09:36:53.428716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:25.911 [2024-11-29 09:36:53.428726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:25.912 [2024-11-29 
09:36:53.428734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.436764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.912 [2024-11-29 09:36:53.436806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.912 [2024-11-29 09:36:53.436818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.981 ms 00:19:25.912 [2024-11-29 09:36:53.436831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.436977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.912 [2024-11-29 09:36:53.436992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.912 [2024-11-29 09:36:53.437002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:25.912 [2024-11-29 09:36:53.437018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.437049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.912 [2024-11-29 09:36:53.437058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:25.912 [2024-11-29 09:36:53.437065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:25.912 [2024-11-29 09:36:53.437073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.437096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:25.912 [2024-11-29 09:36:53.439165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.912 [2024-11-29 09:36:53.439332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.912 [2024-11-29 09:36:53.439355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:19:25.912 [2024-11-29 09:36:53.439371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.439417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.912 [2024-11-29 09:36:53.439426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:25.912 [2024-11-29 09:36:53.439435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:25.912 [2024-11-29 09:36:53.439446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.439469] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:25.912 [2024-11-29 09:36:53.439492] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:25.912 [2024-11-29 09:36:53.439530] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:25.912 [2024-11-29 09:36:53.439555] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:25.912 [2024-11-29 09:36:53.439684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:25.912 [2024-11-29 09:36:53.439697] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:25.912 [2024-11-29 09:36:53.439707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:19:25.912 [2024-11-29 09:36:53.439719] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:25.912 [2024-11-29 09:36:53.439729] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:25.912 [2024-11-29 09:36:53.439737] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:25.912 [2024-11-29 09:36:53.439745] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:25.912 [2024-11-29 09:36:53.439755] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:25.912 [2024-11-29 09:36:53.439765] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:25.912 [2024-11-29 09:36:53.439774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.912 [2024-11-29 09:36:53.439781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:25.912 [2024-11-29 09:36:53.439789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:19:25.912 [2024-11-29 09:36:53.439797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.439885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.912 [2024-11-29 09:36:53.439895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:25.912 [2024-11-29 09:36:53.439903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:25.912 [2024-11-29 09:36:53.439912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.912 [2024-11-29 09:36:53.440015] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:25.912 [2024-11-29 09:36:53.440030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:25.912 [2024-11-29 09:36:53.440040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:25.912 [2024-11-29 09:36:53.440074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:25.912 [2024-11-29 09:36:53.440103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.912 [2024-11-29 09:36:53.440118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:25.912 [2024-11-29 09:36:53.440126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:25.912 [2024-11-29 09:36:53.440134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.912 [2024-11-29 09:36:53.440142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:25.912 [2024-11-29 09:36:53.440150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:25.912 [2024-11-29 09:36:53.440157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:25.912 [2024-11-29 09:36:53.440173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:25.912 [2024-11-29 09:36:53.440201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:25.912 [2024-11-29 09:36:53.440230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:25.912 [2024-11-29 09:36:53.440254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:25.912 [2024-11-29 09:36:53.440278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:25.912 [2024-11-29 09:36:53.440299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.912 [2024-11-29 09:36:53.440312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:25.912 [2024-11-29 09:36:53.440319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:25.912 [2024-11-29 09:36:53.440326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.912 [2024-11-29 09:36:53.440332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:25.912 [2024-11-29 09:36:53.440340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:25.912 [2024-11-29 09:36:53.440347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:25.912 [2024-11-29 09:36:53.440361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:25.912 [2024-11-29 09:36:53.440367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440374] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:25.912 [2024-11-29 09:36:53.440382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:25.912 [2024-11-29 09:36:53.440389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.912 [2024-11-29 09:36:53.440396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.912 [2024-11-29 09:36:53.440404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:25.912 [2024-11-29 09:36:53.440410] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:25.912 [2024-11-29 09:36:53.440418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:25.912 [2024-11-29 09:36:53.440424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:25.913 [2024-11-29 09:36:53.440430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:25.913 [2024-11-29 09:36:53.440438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:25.913 [2024-11-29 09:36:53.440447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:25.913 [2024-11-29 09:36:53.440463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.913 [2024-11-29 09:36:53.440474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:25.913 [2024-11-29 09:36:53.440482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:25.913 [2024-11-29 09:36:53.440490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:25.913 [2024-11-29 09:36:53.440496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:25.913 [2024-11-29 09:36:53.440504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:25.913 [2024-11-29 09:36:53.440512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:25.913 [2024-11-29 09:36:53.440518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:25.913 [2024-11-29 09:36:53.440525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:25.913 [2024-11-29 09:36:53.440533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:25.913 [2024-11-29 09:36:53.440540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:25.913 [2024-11-29 09:36:53.440547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:25.913 [2024-11-29 09:36:53.440554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:25.913 [2024-11-29 09:36:53.440562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:25.913 [2024-11-29 09:36:53.440569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:25.913 [2024-11-29 09:36:53.440576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:25.913 [2024-11-29 09:36:53.440601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.913 [2024-11-29 09:36:53.440610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:25.913 [2024-11-29 09:36:53.440618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:25.913 [2024-11-29 09:36:53.440625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:25.913 [2024-11-29 09:36:53.440632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:25.913 [2024-11-29 09:36:53.440640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.440648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:25.913 [2024-11-29 09:36:53.440667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:19:25.913 [2024-11-29 09:36:53.440675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.455116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.455160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.913 [2024-11-29 09:36:53.455172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.386 ms 00:19:25.913 [2024-11-29 09:36:53.455189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.455327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.455338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:25.913 [2024-11-29 09:36:53.455347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:25.913 [2024-11-29 09:36:53.455354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.477827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.477892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.913 [2024-11-29 09:36:53.477910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.447 ms 00:19:25.913 [2024-11-29 09:36:53.477931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.478051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.478069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.913 [2024-11-29 09:36:53.478084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:25.913 [2024-11-29 09:36:53.478097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.478669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.478702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.913 [2024-11-29 09:36:53.478718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:19:25.913 [2024-11-29 09:36:53.478732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.478946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.478986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.913 [2024-11-29 09:36:53.479000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:19:25.913 [2024-11-29 09:36:53.479011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.487732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.487778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.913 [2024-11-29 09:36:53.487788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.682 ms 00:19:25.913 [2024-11-29 09:36:53.487796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.491680] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:25.913 [2024-11-29 09:36:53.491724] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:25.913 [2024-11-29 09:36:53.491736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.491744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:25.913 [2024-11-29 09:36:53.491753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.840 ms 00:19:25.913 [2024-11-29 09:36:53.491761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.507301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.507345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:25.913 [2024-11-29 09:36:53.507357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.477 ms 00:19:25.913 [2024-11-29 09:36:53.507365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.510238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.510283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:25.913 [2024-11-29 09:36:53.510294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:19:25.913 [2024-11-29 09:36:53.510301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.512752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.512922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:25.913 [2024-11-29 09:36:53.512939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:19:25.913 [2024-11-29 09:36:53.512947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.513287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.513301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:25.913 [2024-11-29 09:36:53.513311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:19:25.913 [2024-11-29 09:36:53.513319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.535969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.536183] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:25.913 [2024-11-29 09:36:53.536205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.624 ms 00:19:25.913 [2024-11-29 09:36:53.536214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.544498] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:25.913 [2024-11-29 09:36:53.563642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.563693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:25.913 [2024-11-29 09:36:53.563706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.329 ms 00:19:25.913 [2024-11-29 09:36:53.563715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.563824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.563844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:25.913 [2024-11-29 09:36:53.563855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:25.913 [2024-11-29 09:36:53.563864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.563920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.563933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:25.913 [2024-11-29 09:36:53.563942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:25.913 [2024-11-29 09:36:53.563954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.563985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.563994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:25.913 [2024-11-29 09:36:53.564005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:25.913 [2024-11-29 09:36:53.564014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.913 [2024-11-29 09:36:53.564052] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:25.913 [2024-11-29 09:36:53.564063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.913 [2024-11-29 09:36:53.564072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:25.913 [2024-11-29 09:36:53.564084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:25.914 [2024-11-29 09:36:53.564092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.914 [2024-11-29 09:36:53.570091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.914 [2024-11-29 09:36:53.570144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:25.914 [2024-11-29 09:36:53.570157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 00:19:25.914 [2024-11-29 09:36:53.570177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.914 [2024-11-29 09:36:53.570274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.914 [2024-11-29 09:36:53.570286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:25.914 [2024-11-29 09:36:53.570295] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:25.914 [2024-11-29 09:36:53.570305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.914 [2024-11-29 09:36:53.571342] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:25.914 [2024-11-29 09:36:53.572693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.649 ms, result 0 00:19:25.914 [2024-11-29 09:36:53.573690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:25.914 [2024-11-29 09:36:53.581294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.299  [2024-11-29T09:36:55.971Z] Copying: 15/256 [MB] (15 MBps) [2024-11-29T09:36:56.914Z] Copying: 26/256 [MB] (11 MBps) [2024-11-29T09:36:57.859Z] Copying: 50/256 [MB] (23 MBps) [2024-11-29T09:36:58.886Z] Copying: 64/256 [MB] (13 MBps) [2024-11-29T09:36:59.829Z] Copying: 74/256 [MB] (10 MBps) [2024-11-29T09:37:00.771Z] Copying: 99/256 [MB] (25 MBps) [2024-11-29T09:37:01.712Z] Copying: 122/256 [MB] (22 MBps) [2024-11-29T09:37:02.654Z] Copying: 144/256 [MB] (22 MBps) [2024-11-29T09:37:04.039Z] Copying: 160/256 [MB] (15 MBps) [2024-11-29T09:37:04.982Z] Copying: 171/256 [MB] (11 MBps) [2024-11-29T09:37:05.922Z] Copying: 189/256 [MB] (17 MBps) [2024-11-29T09:37:06.867Z] Copying: 216/256 [MB] (27 MBps) [2024-11-29T09:37:07.812Z] Copying: 227/256 [MB] (10 MBps) [2024-11-29T09:37:08.757Z] Copying: 242/256 [MB] (15 MBps) [2024-11-29T09:37:09.019Z] Copying: 253/256 [MB] (10 MBps) [2024-11-29T09:37:09.591Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-29 09:37:09.332859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.865 [2024-11-29 09:37:09.335877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.336057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.865 [2024-11-29 09:37:09.336080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.865 [2024-11-29 09:37:09.336090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.865 [2024-11-29 09:37:09.336122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:41.865 [2024-11-29 09:37:09.336824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.336861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.865 [2024-11-29 09:37:09.336874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:19:41.865 [2024-11-29 09:37:09.336884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.865 [2024-11-29 09:37:09.337174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.337195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.865 [2024-11-29 09:37:09.337205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:41.865 [2024-11-29 09:37:09.337214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.865 [2024-11-29 09:37:09.340934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.340954] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.865 [2024-11-29 09:37:09.340965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:19:41.865 [2024-11-29 09:37:09.340973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.865 [2024-11-29 09:37:09.348019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.348059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:41.865 [2024-11-29 09:37:09.348077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.019 ms 00:19:41.865 [2024-11-29 09:37:09.348085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.865 [2024-11-29 09:37:09.351145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.351312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.865 [2024-11-29 09:37:09.351331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:19:41.865 [2024-11-29 09:37:09.351339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.865 [2024-11-29 09:37:09.356722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.356786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.865 [2024-11-29 09:37:09.356797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.340 ms 00:19:41.865 [2024-11-29 09:37:09.356805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.865 [2024-11-29 09:37:09.356949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.865 [2024-11-29 09:37:09.356967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.865 [2024-11-29 09:37:09.356980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:41.866 [2024-11-29 09:37:09.356988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.866 [2024-11-29 09:37:09.360345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.866 [2024-11-29 09:37:09.360393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:41.866 [2024-11-29 09:37:09.360403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:19:41.866 [2024-11-29 09:37:09.360412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.866 [2024-11-29 09:37:09.363294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.866 [2024-11-29 09:37:09.363458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:41.866 [2024-11-29 09:37:09.363476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:19:41.866 [2024-11-29 09:37:09.363484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.866 [2024-11-29 09:37:09.365870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.866 [2024-11-29 09:37:09.365917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.866 [2024-11-29 09:37:09.365927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:19:41.866 [2024-11-29 09:37:09.365935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.866 [2024-11-29 09:37:09.368054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:41.866 [2024-11-29 09:37:09.368101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.866 [2024-11-29 09:37:09.368112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:19:41.866 [2024-11-29 09:37:09.368119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.866 [2024-11-29 09:37:09.368178] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.866 [2024-11-29 09:37:09.368195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368367] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.866 [2024-11-29 09:37:09.368570] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 47-100: 0 / 261120 wr_cnt: 0 state: free
00:19:41.867 [2024-11-29 09:37:09.369057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:41.867 [2024-11-29 09:37:09.369065] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f68941d2-65a9-48fc-bf3a-b4b6b1215387
00:19:41.867 [2024-11-29 09:37:09.369074] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:41.867 [2024-11-29 09:37:09.369082] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:41.867 [2024-11-29 09:37:09.369090] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:41.867 [2024-11-29 09:37:09.369099] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:41.867 [2024-11-29 09:37:09.369107] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:41.867 [2024-11-29 09:37:09.369119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:41.867 [2024-11-29 09:37:09.369127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:41.867 [2024-11-29 09:37:09.369134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:41.867 [2024-11-29 09:37:09.369141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:41.867 [2024-11-29 09:37:09.369148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.867 [2024-11-29 09:37:09.369161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:41.867 [2024-11-29 09:37:09.369171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms
00:19:41.867 [2024-11-29 09:37:09.369179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.371540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.867 [2024-11-29 09:37:09.371573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:41.867 [2024-11-29 09:37:09.371612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.338 ms
00:19:41.867 [2024-11-29 09:37:09.371626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.371746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.867 [2024-11-29 09:37:09.371755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:41.867 [2024-11-29 09:37:09.371765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:19:41.867 [2024-11-29 09:37:09.371773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.379705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.379749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:41.867 [2024-11-29 09:37:09.379766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.379779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.379862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.379872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:41.867 [2024-11-29 09:37:09.379881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.379889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.379940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.379951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:41.867 [2024-11-29 09:37:09.379959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.379970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.379988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.379997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:41.867 [2024-11-29 09:37:09.380005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.380013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.394127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.394181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:41.867 [2024-11-29 09:37:09.394200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.394208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.405601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.405649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:41.867 [2024-11-29 09:37:09.405661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.405669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.405720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.405731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:41.867 [2024-11-29 09:37:09.405749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.405757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.405793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.405802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:41.867 [2024-11-29 09:37:09.405811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:41.867 [2024-11-29 09:37:09.405820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.867 [2024-11-29 09:37:09.405895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:41.867 [2024-11-29 09:37:09.405905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:41.867 [2024-11-29 09:37:09.405914]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.867 [2024-11-29 09:37:09.405922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.867 [2024-11-29 09:37:09.405955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.867 [2024-11-29 09:37:09.405967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:41.867 [2024-11-29 09:37:09.405975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.867 [2024-11-29 09:37:09.405988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.867 [2024-11-29 09:37:09.406037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.867 [2024-11-29 09:37:09.406047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.867 [2024-11-29 09:37:09.406056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.867 [2024-11-29 09:37:09.406064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.867 [2024-11-29 09:37:09.406118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.867 [2024-11-29 09:37:09.406129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.867 [2024-11-29 09:37:09.406137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.867 [2024-11-29 09:37:09.406145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.867 [2024-11-29 09:37:09.406303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.401 ms, result 0 00:19:42.129 00:19:42.129 00:19:42.129 09:37:09 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:42.697 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:42.697 09:37:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:42.697 09:37:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:42.697 09:37:10 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:42.697 09:37:10 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:42.697 09:37:10 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:42.697 09:37:10 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:42.697 Process with pid 89745 is not found 00:19:42.697 09:37:10 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89745 00:19:42.697 09:37:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89745 ']' 00:19:42.697 09:37:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89745 00:19:42.697 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89745) - No such process 00:19:42.697 09:37:10 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89745 is not found' 00:19:42.697 ************************************ 00:19:42.697 END TEST ftl_trim 00:19:42.697 ************************************ 00:19:42.697 00:19:42.697 real 1m11.990s 00:19:42.697 user 1m34.398s 00:19:42.697 sys 0m5.678s 00:19:42.697 09:37:10 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.697 09:37:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:42.697 09:37:10 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:42.697 09:37:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:42.697 09:37:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.697 09:37:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:42.697 ************************************ 00:19:42.697 START TEST ftl_restore 00:19:42.697 ************************************ 00:19:42.697 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:42.958 * Looking for test storage... 00:19:42.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.958 09:37:10 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:42.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.958 --rc genhtml_branch_coverage=1 00:19:42.958 --rc genhtml_function_coverage=1 00:19:42.958 --rc genhtml_legend=1 00:19:42.958 --rc geninfo_all_blocks=1 00:19:42.958 --rc geninfo_unexecuted_blocks=1 00:19:42.958 00:19:42.958 ' 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:42.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.958 --rc genhtml_branch_coverage=1 00:19:42.958 --rc genhtml_function_coverage=1 00:19:42.958 --rc genhtml_legend=1 00:19:42.958 --rc geninfo_all_blocks=1 00:19:42.958 --rc geninfo_unexecuted_blocks=1 00:19:42.958 00:19:42.958 ' 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:42.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.958 --rc genhtml_branch_coverage=1 00:19:42.958 --rc genhtml_function_coverage=1 00:19:42.958 --rc genhtml_legend=1 00:19:42.958 --rc geninfo_all_blocks=1 00:19:42.958 --rc geninfo_unexecuted_blocks=1 00:19:42.958 00:19:42.958 ' 00:19:42.958 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:42.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.958 --rc genhtml_branch_coverage=1 00:19:42.958 --rc genhtml_function_coverage=1 00:19:42.958 --rc genhtml_legend=1 00:19:42.958 --rc geninfo_all_blocks=1 00:19:42.958 --rc geninfo_unexecuted_blocks=1 00:19:42.958 00:19:42.958 ' 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.JTtKODjOTW 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:42.958 09:37:10 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:42.958 
09:37:10 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90034 00:19:42.959 09:37:10 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90034 00:19:42.959 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 90034 ']' 00:19:42.959 09:37:10 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.959 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.959 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.959 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.959 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.959 09:37:10 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:42.959 [2024-11-29 09:37:10.628717] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:42.959 [2024-11-29 09:37:10.629117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90034 ] 00:19:43.220 [2024-11-29 09:37:10.766087] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:43.220 [2024-11-29 09:37:10.796943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.220 [2024-11-29 09:37:10.825845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.791 09:37:11 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.791 09:37:11 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:43.791 09:37:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:43.791 09:37:11 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:43.791 09:37:11 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:43.791 09:37:11 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:43.791 09:37:11 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:43.791 09:37:11 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.365 09:37:11 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.365 09:37:11 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:44.365 09:37:11 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.365 09:37:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:44.365 09:37:11 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:44.365 09:37:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:44.365 09:37:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:44.365 09:37:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:44.365 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:44.365 { 00:19:44.365 "name": "nvme0n1", 00:19:44.365 "aliases": [ 00:19:44.365 
"c8b59a1f-e0d3-4fde-967a-3a156464050e" 00:19:44.365 ], 00:19:44.365 "product_name": "NVMe disk", 00:19:44.365 "block_size": 4096, 00:19:44.365 "num_blocks": 1310720, 00:19:44.365 "uuid": "c8b59a1f-e0d3-4fde-967a-3a156464050e", 00:19:44.365 "numa_id": -1, 00:19:44.365 "assigned_rate_limits": { 00:19:44.365 "rw_ios_per_sec": 0, 00:19:44.365 "rw_mbytes_per_sec": 0, 00:19:44.365 "r_mbytes_per_sec": 0, 00:19:44.365 "w_mbytes_per_sec": 0 00:19:44.365 }, 00:19:44.365 "claimed": true, 00:19:44.365 "claim_type": "read_many_write_one", 00:19:44.365 "zoned": false, 00:19:44.365 "supported_io_types": { 00:19:44.365 "read": true, 00:19:44.365 "write": true, 00:19:44.365 "unmap": true, 00:19:44.365 "flush": true, 00:19:44.365 "reset": true, 00:19:44.365 "nvme_admin": true, 00:19:44.365 "nvme_io": true, 00:19:44.365 "nvme_io_md": false, 00:19:44.365 "write_zeroes": true, 00:19:44.365 "zcopy": false, 00:19:44.365 "get_zone_info": false, 00:19:44.365 "zone_management": false, 00:19:44.365 "zone_append": false, 00:19:44.365 "compare": true, 00:19:44.365 "compare_and_write": false, 00:19:44.365 "abort": true, 00:19:44.365 "seek_hole": false, 00:19:44.365 "seek_data": false, 00:19:44.365 "copy": true, 00:19:44.365 "nvme_iov_md": false 00:19:44.365 }, 00:19:44.365 "driver_specific": { 00:19:44.365 "nvme": [ 00:19:44.365 { 00:19:44.365 "pci_address": "0000:00:11.0", 00:19:44.365 "trid": { 00:19:44.365 "trtype": "PCIe", 00:19:44.365 "traddr": "0000:00:11.0" 00:19:44.365 }, 00:19:44.365 "ctrlr_data": { 00:19:44.365 "cntlid": 0, 00:19:44.365 "vendor_id": "0x1b36", 00:19:44.365 "model_number": "QEMU NVMe Ctrl", 00:19:44.365 "serial_number": "12341", 00:19:44.365 "firmware_revision": "8.0.0", 00:19:44.365 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:44.365 "oacs": { 00:19:44.365 "security": 0, 00:19:44.365 "format": 1, 00:19:44.365 "firmware": 0, 00:19:44.365 "ns_manage": 1 00:19:44.365 }, 00:19:44.365 "multi_ctrlr": false, 00:19:44.365 "ana_reporting": false 00:19:44.365 }, 00:19:44.365 "vs": { 00:19:44.365 "nvme_version": "1.4" 00:19:44.365 }, 00:19:44.365 "ns_data": { 00:19:44.365 "id": 1, 00:19:44.365 "can_share": false 00:19:44.365 } 00:19:44.365 } 00:19:44.365 ], 00:19:44.365 "mp_policy": "active_passive" 00:19:44.365 } 00:19:44.365 } 00:19:44.365 ]' 00:19:44.365 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:44.365 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:44.365 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:44.365 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:44.365 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:44.365 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:44.365 09:37:12 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:44.365 09:37:12 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:44.365 09:37:12 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:44.365 09:37:12 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:44.365 09:37:12 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:44.626 09:37:12 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=11c6ec6a-00d2-4cb7-879f-0fd44f724fae 00:19:44.626 09:37:12 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:44.626 09:37:12 ftl.ftl_restore -- ftl/common.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11c6ec6a-00d2-4cb7-879f-0fd44f724fae 00:19:44.887 09:37:12 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:45.148 09:37:12 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=8b29b896-92f5-41a6-8bf9-a7a338cd7730 00:19:45.148 09:37:12 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8b29b896-92f5-41a6-8bf9-a7a338cd7730 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:45.410 09:37:12 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.410 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.410 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.410 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:45.410 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:45.410 09:37:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.671 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:45.671 { 00:19:45.671 "name": "1986ddcf-f204-43c5-899e-3e09bfa6ed20", 00:19:45.671 "aliases": [ 00:19:45.671 "lvs/nvme0n1p0" 00:19:45.671 ], 00:19:45.671 "product_name": "Logical Volume", 00:19:45.671 "block_size": 4096, 00:19:45.671 "num_blocks": 26476544, 00:19:45.671 "uuid": "1986ddcf-f204-43c5-899e-3e09bfa6ed20", 00:19:45.671 "assigned_rate_limits": { 00:19:45.671 "rw_ios_per_sec": 0, 00:19:45.671 "rw_mbytes_per_sec": 0, 00:19:45.671 "r_mbytes_per_sec": 0, 00:19:45.671 "w_mbytes_per_sec": 0 00:19:45.671 }, 00:19:45.671 "claimed": false, 00:19:45.671 "zoned": false, 00:19:45.671 "supported_io_types": { 00:19:45.671 "read": true, 00:19:45.671 "write": true, 00:19:45.671 "unmap": true, 00:19:45.671 "flush": false, 00:19:45.671 "reset": true, 00:19:45.671 "nvme_admin": false, 00:19:45.671 "nvme_io": false, 00:19:45.671 "nvme_io_md": false, 00:19:45.671 "write_zeroes": true, 00:19:45.671 "zcopy": false, 00:19:45.671 "get_zone_info": false, 00:19:45.671 "zone_management": false, 00:19:45.671 "zone_append": false, 00:19:45.671 "compare": false, 00:19:45.671 "compare_and_write": false, 00:19:45.671 "abort": false, 00:19:45.671 "seek_hole": true, 00:19:45.671 "seek_data": true, 00:19:45.671 "copy": false, 00:19:45.671 "nvme_iov_md": false 00:19:45.671 }, 00:19:45.671 "driver_specific": { 00:19:45.671 "lvol": { 00:19:45.671 "lvol_store_uuid": "8b29b896-92f5-41a6-8bf9-a7a338cd7730", 00:19:45.671 "base_bdev": "nvme0n1", 00:19:45.671 "thin_provision": true, 00:19:45.671 "num_allocated_clusters": 0, 
00:19:45.671 "snapshot": false, 00:19:45.671 "clone": false, 00:19:45.671 "esnap_clone": false 00:19:45.671 } 00:19:45.671 } 00:19:45.671 } 00:19:45.671 ]' 00:19:45.671 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.671 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.671 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.671 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:45.671 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:45.671 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:45.671 09:37:13 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.671 09:37:13 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.671 09:37:13 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:45.931 09:37:13 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:45.931 09:37:13 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:45.931 09:37:13 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.931 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:45.931 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.931 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:45.931 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:45.931 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:46.192 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.192 { 00:19:46.192 "name": "1986ddcf-f204-43c5-899e-3e09bfa6ed20", 00:19:46.192 "aliases": [ 00:19:46.192 "lvs/nvme0n1p0" 00:19:46.192 ], 00:19:46.192 "product_name": "Logical Volume", 00:19:46.192 "block_size": 4096, 00:19:46.192 "num_blocks": 26476544, 00:19:46.192 "uuid": "1986ddcf-f204-43c5-899e-3e09bfa6ed20", 00:19:46.192 "assigned_rate_limits": { 00:19:46.192 "rw_ios_per_sec": 0, 00:19:46.192 "rw_mbytes_per_sec": 0, 00:19:46.192 "r_mbytes_per_sec": 0, 00:19:46.192 "w_mbytes_per_sec": 0 00:19:46.192 }, 00:19:46.192 "claimed": false, 00:19:46.192 "zoned": false, 00:19:46.192 "supported_io_types": { 00:19:46.192 "read": true, 00:19:46.192 "write": true, 00:19:46.192 "unmap": true, 00:19:46.192 "flush": false, 00:19:46.192 "reset": true, 00:19:46.192 "nvme_admin": false, 00:19:46.192 "nvme_io": false, 00:19:46.192 "nvme_io_md": false, 00:19:46.192 "write_zeroes": true, 00:19:46.192 "zcopy": false, 00:19:46.192 "get_zone_info": false, 00:19:46.192 "zone_management": false, 00:19:46.192 "zone_append": false, 00:19:46.192 "compare": false, 00:19:46.192 "compare_and_write": false, 00:19:46.192 "abort": false, 00:19:46.192 "seek_hole": true, 00:19:46.192 "seek_data": true, 00:19:46.192 "copy": false, 00:19:46.192 "nvme_iov_md": false 00:19:46.192 }, 00:19:46.192 "driver_specific": { 00:19:46.192 "lvol": { 00:19:46.192 "lvol_store_uuid": "8b29b896-92f5-41a6-8bf9-a7a338cd7730", 00:19:46.192 "base_bdev": "nvme0n1", 00:19:46.192 "thin_provision": true, 00:19:46.192 "num_allocated_clusters": 0, 00:19:46.192 "snapshot": false, 00:19:46.192 "clone": false, 
00:19:46.192 "esnap_clone": false 00:19:46.192 } 00:19:46.192 } 00:19:46.192 } 00:19:46.192 ]' 00:19:46.192 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.192 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.192 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.192 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.192 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.192 09:37:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.192 09:37:13 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:46.192 09:37:13 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.451 09:37:14 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:46.451 09:37:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:46.451 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:46.451 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.451 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:46.451 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:46.451 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1986ddcf-f204-43c5-899e-3e09bfa6ed20 00:19:46.710 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.710 { 00:19:46.710 "name": "1986ddcf-f204-43c5-899e-3e09bfa6ed20", 00:19:46.710 "aliases": [ 00:19:46.710 "lvs/nvme0n1p0" 00:19:46.710 ], 00:19:46.710 "product_name": "Logical Volume", 00:19:46.710 "block_size": 4096, 00:19:46.710 "num_blocks": 26476544, 00:19:46.710 "uuid": "1986ddcf-f204-43c5-899e-3e09bfa6ed20", 00:19:46.710 "assigned_rate_limits": { 00:19:46.710 "rw_ios_per_sec": 0, 00:19:46.710 "rw_mbytes_per_sec": 0, 00:19:46.710 "r_mbytes_per_sec": 0, 00:19:46.710 "w_mbytes_per_sec": 0 00:19:46.710 }, 00:19:46.710 "claimed": false, 00:19:46.710 "zoned": false, 00:19:46.710 "supported_io_types": { 00:19:46.710 "read": true, 00:19:46.710 "write": true, 00:19:46.710 "unmap": true, 00:19:46.710 "flush": false, 00:19:46.710 "reset": true, 00:19:46.710 "nvme_admin": false, 00:19:46.710 "nvme_io": false, 00:19:46.710 "nvme_io_md": false, 00:19:46.710 "write_zeroes": true, 00:19:46.710 "zcopy": false, 00:19:46.710 "get_zone_info": false, 00:19:46.710 "zone_management": false, 00:19:46.710 "zone_append": false, 00:19:46.710 "compare": false, 00:19:46.710 "compare_and_write": false, 00:19:46.710 "abort": false, 00:19:46.710 "seek_hole": true, 00:19:46.710 "seek_data": true, 00:19:46.710 "copy": false, 00:19:46.710 "nvme_iov_md": false 00:19:46.710 }, 00:19:46.710 "driver_specific": { 00:19:46.710 "lvol": { 00:19:46.710 "lvol_store_uuid": "8b29b896-92f5-41a6-8bf9-a7a338cd7730", 00:19:46.710 "base_bdev": "nvme0n1", 00:19:46.710 "thin_provision": true, 00:19:46.710 "num_allocated_clusters": 0, 00:19:46.710 "snapshot": false, 00:19:46.710 "clone": false, 00:19:46.710 "esnap_clone": false 00:19:46.710 } 00:19:46.710 } 00:19:46.710 } 00:19:46.710 ]' 00:19:46.710 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.710 09:37:14 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:46.710 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.710 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.710 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.710 09:37:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.710 09:37:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:46.710 09:37:14 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1986ddcf-f204-43c5-899e-3e09bfa6ed20 --l2p_dram_limit 10' 00:19:46.710 09:37:14 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:46.710 09:37:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:46.710 09:37:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:46.710 09:37:14 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:46.710 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:46.710 09:37:14 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1986ddcf-f204-43c5-899e-3e09bfa6ed20 --l2p_dram_limit 10 -c nvc0n1p0 00:19:46.971 [2024-11-29 09:37:14.464785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.464823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.971 [2024-11-29 09:37:14.464835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:46.971 [2024-11-29 09:37:14.464842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.464888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.464898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.971 [2024-11-29 09:37:14.464907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:46.971 [2024-11-29 09:37:14.464913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.464929] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.971 [2024-11-29 09:37:14.465152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.971 [2024-11-29 09:37:14.465167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.465173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.971 [2024-11-29 09:37:14.465180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:19:46.971 [2024-11-29 09:37:14.465188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.465212] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d7621d15-6b86-4809-9c43-c56efb3866c5 00:19:46.971 [2024-11-29 09:37:14.466164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.466186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:46.971 [2024-11-29 09:37:14.466194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:46.971 [2024-11-29 09:37:14.466202] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.470866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.470894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.971 [2024-11-29 09:37:14.470903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:19:46.971 [2024-11-29 09:37:14.470914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.470981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.470990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.971 [2024-11-29 09:37:14.470997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:46.971 [2024-11-29 09:37:14.471004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.471043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.471054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.971 [2024-11-29 09:37:14.471060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:46.971 [2024-11-29 09:37:14.471070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.471089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.971 [2024-11-29 09:37:14.472355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.472380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.971 [2024-11-29 09:37:14.472390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:19:46.971 [2024-11-29 09:37:14.472399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.472426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.472434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.971 [2024-11-29 09:37:14.472444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:46.971 [2024-11-29 09:37:14.472450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.472465] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:46.971 [2024-11-29 09:37:14.472573] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.971 [2024-11-29 09:37:14.472593] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.971 [2024-11-29 09:37:14.472602] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.971 [2024-11-29 09:37:14.472618] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.971 [2024-11-29 09:37:14.472630] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.971 [2024-11-29 09:37:14.472640] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.971 [2024-11-29 09:37:14.472646] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 
4 00:19:46.971 [2024-11-29 09:37:14.472654] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.971 [2024-11-29 09:37:14.472661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.971 [2024-11-29 09:37:14.472669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.472676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.971 [2024-11-29 09:37:14.472684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:46.971 [2024-11-29 09:37:14.472689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.472754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.971 [2024-11-29 09:37:14.472761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.971 [2024-11-29 09:37:14.472769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:46.971 [2024-11-29 09:37:14.472775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.971 [2024-11-29 09:37:14.472845] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.971 [2024-11-29 09:37:14.472852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.971 [2024-11-29 09:37:14.472859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.971 [2024-11-29 09:37:14.472864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.971 [2024-11-29 09:37:14.472871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.971 [2024-11-29 09:37:14.472876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.971 [2024-11-29 09:37:14.472882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.971 [2024-11-29 09:37:14.472887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.971 [2024-11-29 09:37:14.472894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.971 [2024-11-29 09:37:14.472899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.971 [2024-11-29 09:37:14.472906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.971 [2024-11-29 09:37:14.472911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.971 [2024-11-29 09:37:14.472919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.971 [2024-11-29 09:37:14.472924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.971 [2024-11-29 09:37:14.472930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.971 [2024-11-29 09:37:14.472936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.971 [2024-11-29 09:37:14.472943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.971 [2024-11-29 09:37:14.472948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.971 [2024-11-29 09:37:14.472954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.971 [2024-11-29 09:37:14.472959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.971 [2024-11-29 09:37:14.472966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.971 [2024-11-29 09:37:14.472971] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.971 [2024-11-29 09:37:14.472977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.971 [2024-11-29 09:37:14.472981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.971 [2024-11-29 09:37:14.472987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.971 [2024-11-29 09:37:14.472992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.971 [2024-11-29 09:37:14.472998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.971 [2024-11-29 09:37:14.473002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.971 [2024-11-29 09:37:14.473010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.971 [2024-11-29 09:37:14.473015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.972 [2024-11-29 09:37:14.473020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.972 [2024-11-29 09:37:14.473025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.972 [2024-11-29 09:37:14.473031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.972 [2024-11-29 09:37:14.473036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.972 [2024-11-29 09:37:14.473043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.972 [2024-11-29 09:37:14.473047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.972 [2024-11-29 09:37:14.473054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.972 [2024-11-29 09:37:14.473059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.972 [2024-11-29 09:37:14.473066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:46.972 [2024-11-29 09:37:14.473071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.972 [2024-11-29 09:37:14.473077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.972 [2024-11-29 09:37:14.473081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.972 [2024-11-29 09:37:14.473087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.972 [2024-11-29 09:37:14.473092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.972 [2024-11-29 09:37:14.473100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.972 [2024-11-29 09:37:14.473105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.972 [2024-11-29 09:37:14.473112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.972 [2024-11-29 09:37:14.473119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.972 [2024-11-29 09:37:14.473126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.972 [2024-11-29 09:37:14.473130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.972 [2024-11-29 09:37:14.473137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.972 [2024-11-29 09:37:14.473141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.972 [2024-11-29 09:37:14.473147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
00:19:46.972 [2024-11-29 09:37:14.473155] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.972 [2024-11-29 09:37:14.473163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.972 [2024-11-29 09:37:14.473169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.972 [2024-11-29 09:37:14.473176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.972 [2024-11-29 09:37:14.473181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.972 [2024-11-29 09:37:14.473188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.972 [2024-11-29 09:37:14.473193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.972 [2024-11-29 09:37:14.473201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:46.972 [2024-11-29 09:37:14.473207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.972 [2024-11-29 09:37:14.473214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:46.972 [2024-11-29 09:37:14.473218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.972 [2024-11-29 09:37:14.473225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.972 [2024-11-29 09:37:14.473230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.972 [2024-11-29 09:37:14.473237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.972 [2024-11-29 09:37:14.473242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.972 [2024-11-29 09:37:14.473249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.972 [2024-11-29 09:37:14.473254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.972 [2024-11-29 09:37:14.473261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.972 [2024-11-29 09:37:14.473267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.972 [2024-11-29 09:37:14.473273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.972 [2024-11-29 09:37:14.473279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.972 [2024-11-29 09:37:14.473285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.972 [2024-11-29 09:37:14.473291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.972 [2024-11-29 09:37:14.473299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.972 [2024-11-29 09:37:14.473304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:19:46.972 [2024-11-29 09:37:14.473311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.972 [2024-11-29 09:37:14.473341] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:46.972 [2024-11-29 09:37:14.473351] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:51.180 [2024-11-29 09:37:18.201859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.180 [2024-11-29 09:37:18.202216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:51.180 [2024-11-29 09:37:18.202253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3728.500 ms 00:19:51.180 [2024-11-29 09:37:18.202280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.180 [2024-11-29 09:37:18.215888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.180 [2024-11-29 09:37:18.215955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.180 [2024-11-29 09:37:18.215971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.477 ms 00:19:51.180 [2024-11-29 09:37:18.215991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.180 [2024-11-29 09:37:18.216123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.180 [2024-11-29 09:37:18.216137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.180 [2024-11-29 09:37:18.216146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:51.180 [2024-11-29 09:37:18.216161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.180 [2024-11-29 09:37:18.228825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.180 [2024-11-29 09:37:18.228881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.180 [2024-11-29 09:37:18.228897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.621 ms 00:19:51.180 [2024-11-29 09:37:18.228908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.180 [2024-11-29 09:37:18.228942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.180 [2024-11-29 09:37:18.228959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.180 [2024-11-29 09:37:18.228968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.180 [2024-11-29 09:37:18.228978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.180 [2024-11-29 09:37:18.229505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.180 [2024-11-29 09:37:18.229533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.181 [2024-11-29 09:37:18.229544] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:19:51.181 [2024-11-29 09:37:18.229579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.229725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.229739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.181 [2024-11-29 09:37:18.229748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:51.181 [2024-11-29 09:37:18.229765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.238094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.238149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.181 [2024-11-29 09:37:18.238161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.308 ms 00:19:51.181 [2024-11-29 09:37:18.238172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.260374] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:51.181 [2024-11-29 09:37:18.264735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.264780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.181 [2024-11-29 09:37:18.264796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.452 ms 00:19:51.181 [2024-11-29 09:37:18.264804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.358978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.359044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:51.181 [2024-11-29 09:37:18.359067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.123 ms 00:19:51.181 [2024-11-29 09:37:18.359077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.359285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.359307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.181 [2024-11-29 09:37:18.359319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:19:51.181 [2024-11-29 09:37:18.359327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.365162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.365222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:51.181 [2024-11-29 09:37:18.365236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.806 ms 00:19:51.181 [2024-11-29 09:37:18.365245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.370389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.370446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:51.181 [2024-11-29 09:37:18.370462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.085 ms 00:19:51.181 [2024-11-29 09:37:18.370470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.370861] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.370883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.181 [2024-11-29 09:37:18.370898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:51.181 [2024-11-29 09:37:18.370908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.412723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.412786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:51.181 [2024-11-29 09:37:18.412802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.783 ms 00:19:51.181 [2024-11-29 09:37:18.412811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.419574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.419638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:51.181 [2024-11-29 09:37:18.419653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.696 ms 00:19:51.181 [2024-11-29 09:37:18.419662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.425140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.425188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:51.181 [2024-11-29 09:37:18.425202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.425 ms 00:19:51.181 [2024-11-29 09:37:18.425210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.431416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.431648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.181 [2024-11-29 09:37:18.431678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:19:51.181 [2024-11-29 09:37:18.431686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.431741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.431751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:51.181 [2024-11-29 09:37:18.431763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:51.181 [2024-11-29 09:37:18.431772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.431858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.431868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:51.181 [2024-11-29 09:37:18.431883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:51.181 [2024-11-29 09:37:18.431891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.433233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3967.890 ms, result 0 00:19:51.181 { 00:19:51.181 "name": "ftl0", 00:19:51.181 "uuid": "d7621d15-6b86-4809-9c43-c56efb3866c5" 00:19:51.181 } 00:19:51.181 09:37:18 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:51.181 09:37:18 ftl.ftl_restore -- ftl/restore.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:51.181 09:37:18 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:51.181 09:37:18 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:51.181 [2024-11-29 09:37:18.882740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.882939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.181 [2024-11-29 09:37:18.882959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.181 [2024-11-29 09:37:18.882970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.883001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:51.181 [2024-11-29 09:37:18.883731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.883776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.181 [2024-11-29 09:37:18.883791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:19:51.181 [2024-11-29 09:37:18.883800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.884062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.884074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.181 [2024-11-29 09:37:18.884086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:19:51.181 [2024-11-29 09:37:18.884095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.887519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.887643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.181 [2024-11-29 09:37:18.887713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.398 ms 00:19:51.181 [2024-11-29 09:37:18.887739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.894185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.894322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.181 [2024-11-29 09:37:18.894396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms 00:19:51.181 [2024-11-29 09:37:18.894426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.181 [2024-11-29 09:37:18.897183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.181 [2024-11-29 09:37:18.897334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.181 [2024-11-29 09:37:18.897395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.651 ms 00:19:51.181 [2024-11-29 09:37:18.897418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.444 [2024-11-29 09:37:18.904150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.444 [2024-11-29 09:37:18.904318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.444 [2024-11-29 09:37:18.904395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:19:51.444 [2024-11-29 09:37:18.904423] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:51.444 [2024-11-29 09:37:18.904615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.444 [2024-11-29 09:37:18.904840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.444 [2024-11-29 09:37:18.904858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:51.444 [2024-11-29 09:37:18.904868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.444 [2024-11-29 09:37:18.908098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.444 [2024-11-29 09:37:18.908238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.444 [2024-11-29 09:37:18.908295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:19:51.444 [2024-11-29 09:37:18.908317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.444 [2024-11-29 09:37:18.910929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.444 [2024-11-29 09:37:18.911076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.444 [2024-11-29 09:37:18.911131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:19:51.444 [2024-11-29 09:37:18.911153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.444 [2024-11-29 09:37:18.913285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.444 [2024-11-29 09:37:18.913425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.444 [2024-11-29 09:37:18.913482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:19:51.444 [2024-11-29 09:37:18.913503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.444 [2024-11-29 09:37:18.915837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.444 [2024-11-29 09:37:18.915972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.444 [2024-11-29 09:37:18.915992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:19:51.444 [2024-11-29 09:37:18.916000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.444 [2024-11-29 09:37:18.916037] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.444 [2024-11-29 09:37:18.916053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:19:51.444 [2024-11-29 09:37:18.916136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.444 [2024-11-29 09:37:18.916427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916811] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.445 [2024-11-29 09:37:18.916989] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.445 [2024-11-29 09:37:18.916998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d7621d15-6b86-4809-9c43-c56efb3866c5 00:19:51.445 [2024-11-29 09:37:18.917012] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.445 [2024-11-29 09:37:18.917022] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.445 [2024-11-29 09:37:18.917029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.445 [2024-11-29 09:37:18.917042] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.445 [2024-11-29 09:37:18.917049] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.445 [2024-11-29 09:37:18.917059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.445 [2024-11-29 09:37:18.917067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.445 
[2024-11-29 09:37:18.917076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.445 [2024-11-29 09:37:18.917082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.445 [2024-11-29 09:37:18.917092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.445 [2024-11-29 09:37:18.917100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.445 [2024-11-29 09:37:18.917110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:19:51.445 [2024-11-29 09:37:18.917118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.445 [2024-11-29 09:37:18.919564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.445 [2024-11-29 09:37:18.919708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.445 [2024-11-29 09:37:18.919779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms 00:19:51.445 [2024-11-29 09:37:18.919886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.445 [2024-11-29 09:37:18.920024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.445 [2024-11-29 09:37:18.920054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.445 [2024-11-29 09:37:18.920130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:51.445 [2024-11-29 09:37:18.920192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.445 [2024-11-29 09:37:18.928011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.445 [2024-11-29 09:37:18.928068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.445 [2024-11-29 09:37:18.928083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.445 [2024-11-29 09:37:18.928090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.445 [2024-11-29 09:37:18.928161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.445 [2024-11-29 09:37:18.928170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.445 [2024-11-29 09:37:18.928180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.445 [2024-11-29 09:37:18.928188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.445 [2024-11-29 09:37:18.928253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.445 [2024-11-29 09:37:18.928264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.445 [2024-11-29 09:37:18.928277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.445 [2024-11-29 09:37:18.928284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.445 [2024-11-29 09:37:18.928304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.445 [2024-11-29 09:37:18.928311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.445 [2024-11-29 09:37:18.928327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.445 [2024-11-29 09:37:18.928334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.445 [2024-11-29 09:37:18.942731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.942793] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.446 [2024-11-29 09:37:18.942808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.942817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.954399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.954460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.446 [2024-11-29 09:37:18.954476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.954485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.954640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.954652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.446 [2024-11-29 09:37:18.954664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.954679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.954730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.954741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.446 [2024-11-29 09:37:18.954753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.954763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.954844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.954855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.446 [2024-11-29 09:37:18.954867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.954875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.954911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.954921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.446 [2024-11-29 09:37:18.954932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.954940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.954985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.954996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.446 [2024-11-29 09:37:18.955006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.955014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.955065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.446 [2024-11-29 09:37:18.955076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.446 [2024-11-29 09:37:18.955087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.446 [2024-11-29 09:37:18.955094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.446 [2024-11-29 09:37:18.955249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 72.451 ms, result 0 00:19:51.446 true 00:19:51.446 09:37:18 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90034 00:19:51.446 09:37:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90034 ']' 00:19:51.446 09:37:18 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90034 00:19:51.446 09:37:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:19:51.446 09:37:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.446 09:37:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90034 00:19:51.446 killing process with pid 90034 00:19:51.446 09:37:19 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.446 09:37:19 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.446 09:37:19 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90034' 00:19:51.446 09:37:19 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 90034 00:19:51.446 09:37:19 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 90034 00:19:53.990 09:37:21 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:59.273 262144+0 records in 00:19:59.273 262144+0 records out 00:19:59.273 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28044 s, 251 MB/s 00:19:59.273 09:37:25 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:00.210 09:37:27 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:00.210 [2024-11-29 09:37:27.637894] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:00.211 [2024-11-29 09:37:27.638020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90242 ] 00:20:00.211 [2024-11-29 09:37:27.771981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
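For reference, the dd summary above is internally consistent: bs=4K with count=256K means 262144 records of 4096 bytes, i.e. 1073741824 bytes (1.0 GiB), and 1073741824 bytes / 4.28044 s is about 251 MB/s in dd's decimal-megabyte convention. A minimal Python sketch of the same arithmetic — illustrative only, not part of the test suite:

bs = 4 * 1024               # bs=4K: bytes per record
count = 256 * 1024          # count=256K: number of records
elapsed_s = 4.28044         # elapsed time reported by dd above
total_bytes = bs * count    # 1073741824 bytes == 1.0 GiB
rate_mb_s = total_bytes / elapsed_s / 1e6   # dd reports decimal MB/s
print(total_bytes, round(rate_mb_s))        # -> 1073741824 251

As the surrounding trace shows, this random file is then checksummed (restore.sh@70) and written into the ftl0 bdev via spdk_dd (restore.sh@73), whose startup output follows.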
00:20:00.211 [2024-11-29 09:37:27.803457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.211 [2024-11-29 09:37:27.822468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.211 [2024-11-29 09:37:27.910616] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.211 [2024-11-29 09:37:27.910690] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.475 [2024-11-29 09:37:28.067921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.068103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.475 [2024-11-29 09:37:28.068124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.475 [2024-11-29 09:37:28.068132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.068189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.068200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.475 [2024-11-29 09:37:28.068208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:00.475 [2024-11-29 09:37:28.068218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.068237] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.475 [2024-11-29 09:37:28.068482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.475 [2024-11-29 09:37:28.068497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.068507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.475 [2024-11-29 09:37:28.068515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:00.475 [2024-11-29 09:37:28.068526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.069603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.475 [2024-11-29 09:37:28.072223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.072260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.475 [2024-11-29 09:37:28.072282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:20:00.475 [2024-11-29 09:37:28.072290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.072355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.072365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.475 [2024-11-29 09:37:28.072373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:00.475 [2024-11-29 09:37:28.072381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.077273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.077304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.475 [2024-11-29 09:37:28.077313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.833 ms 00:20:00.475 [2024-11-29 09:37:28.077320] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.077404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.077413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.475 [2024-11-29 09:37:28.077427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:00.475 [2024-11-29 09:37:28.077434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.077473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.077482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.475 [2024-11-29 09:37:28.077496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:00.475 [2024-11-29 09:37:28.077506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.077527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.475 [2024-11-29 09:37:28.078859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.078883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.475 [2024-11-29 09:37:28.078893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:20:00.475 [2024-11-29 09:37:28.078901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.078928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.078937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.475 [2024-11-29 09:37:28.078954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.475 [2024-11-29 09:37:28.078962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.078984] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.475 [2024-11-29 09:37:28.079002] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.475 [2024-11-29 09:37:28.079040] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.475 [2024-11-29 09:37:28.079209] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.475 [2024-11-29 09:37:28.079346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.475 [2024-11-29 09:37:28.079366] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.475 [2024-11-29 09:37:28.079381] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.475 [2024-11-29 09:37:28.079395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.475 [2024-11-29 09:37:28.079408] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.475 [2024-11-29 09:37:28.079420] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:00.475 [2024-11-29 09:37:28.079430] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:00.475 [2024-11-29 09:37:28.079442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.475 [2024-11-29 09:37:28.079453] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.475 [2024-11-29 09:37:28.079464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.079475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.475 [2024-11-29 09:37:28.079486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:20:00.475 [2024-11-29 09:37:28.079502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.079629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.475 [2024-11-29 09:37:28.079642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.475 [2024-11-29 09:37:28.079652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:20:00.475 [2024-11-29 09:37:28.079663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.475 [2024-11-29 09:37:28.079795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.475 [2024-11-29 09:37:28.079813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.475 [2024-11-29 09:37:28.079828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.475 [2024-11-29 09:37:28.079839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.475 [2024-11-29 09:37:28.079852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.475 [2024-11-29 09:37:28.079862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.475 [2024-11-29 09:37:28.079878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:00.475 [2024-11-29 09:37:28.079888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.475 [2024-11-29 09:37:28.079899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:00.475 [2024-11-29 09:37:28.079908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.475 [2024-11-29 09:37:28.079921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.475 [2024-11-29 09:37:28.079931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:00.475 [2024-11-29 09:37:28.079941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.475 [2024-11-29 09:37:28.079950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.475 [2024-11-29 09:37:28.079960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:00.475 [2024-11-29 09:37:28.079970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.475 [2024-11-29 09:37:28.079981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.475 [2024-11-29 09:37:28.079992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:00.475 [2024-11-29 09:37:28.080001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.475 [2024-11-29 09:37:28.080011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.475 [2024-11-29 09:37:28.080021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:00.475 [2024-11-29 09:37:28.080031] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.476 [2024-11-29 09:37:28.080040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.476 [2024-11-29 09:37:28.080050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:00.476 [2024-11-29 09:37:28.080060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.476 [2024-11-29 09:37:28.080070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.476 [2024-11-29 09:37:28.080082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:00.476 [2024-11-29 09:37:28.080089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.476 [2024-11-29 09:37:28.080096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.476 [2024-11-29 09:37:28.080102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:00.476 [2024-11-29 09:37:28.080108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.476 [2024-11-29 09:37:28.080115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.476 [2024-11-29 09:37:28.080121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:00.476 [2024-11-29 09:37:28.080127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.476 [2024-11-29 09:37:28.080134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.476 [2024-11-29 09:37:28.080141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:00.476 [2024-11-29 09:37:28.080147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.476 [2024-11-29 09:37:28.080153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.476 [2024-11-29 09:37:28.080160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:00.476 [2024-11-29 09:37:28.080166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.476 [2024-11-29 09:37:28.080172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.476 [2024-11-29 09:37:28.080178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:00.476 [2024-11-29 09:37:28.080186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.476 [2024-11-29 09:37:28.080192] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.476 [2024-11-29 09:37:28.080203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.476 [2024-11-29 09:37:28.080210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.476 [2024-11-29 09:37:28.080218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.476 [2024-11-29 09:37:28.080226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.476 [2024-11-29 09:37:28.080233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.476 [2024-11-29 09:37:28.080239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.476 [2024-11-29 09:37:28.080246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.476 [2024-11-29 09:37:28.080252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.476 [2024-11-29 09:37:28.080259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:00.476 [2024-11-29 09:37:28.080267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.476 [2024-11-29 09:37:28.080276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.476 [2024-11-29 09:37:28.080284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:00.476 [2024-11-29 09:37:28.080291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:00.476 [2024-11-29 09:37:28.080298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:00.476 [2024-11-29 09:37:28.080306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:00.476 [2024-11-29 09:37:28.080314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:00.476 [2024-11-29 09:37:28.080320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:00.476 [2024-11-29 09:37:28.080327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:00.476 [2024-11-29 09:37:28.080334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:00.476 [2024-11-29 09:37:28.080341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:00.476 [2024-11-29 09:37:28.080347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:00.476 [2024-11-29 09:37:28.080354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:00.476 [2024-11-29 09:37:28.080360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:00.476 [2024-11-29 09:37:28.080367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:00.476 [2024-11-29 09:37:28.080375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:00.476 [2024-11-29 09:37:28.080381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.476 [2024-11-29 09:37:28.080389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.476 [2024-11-29 09:37:28.080397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.476 [2024-11-29 09:37:28.080404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.476 [2024-11-29 09:37:28.080411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.476 [2024-11-29 09:37:28.080420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.476 [2024-11-29 09:37:28.080428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.080436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.476 [2024-11-29 09:37:28.080447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:20:00.476 [2024-11-29 09:37:28.080457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.089299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.089335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.476 [2024-11-29 09:37:28.089345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.795 ms 00:20:00.476 [2024-11-29 09:37:28.089357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.089437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.089445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.476 [2024-11-29 09:37:28.089453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:00.476 [2024-11-29 09:37:28.089460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.107228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.107270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.476 [2024-11-29 09:37:28.107283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.720 ms 00:20:00.476 [2024-11-29 09:37:28.107297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.107334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.107343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.476 [2024-11-29 09:37:28.107351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:00.476 [2024-11-29 09:37:28.107364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.107735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.107759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.476 [2024-11-29 09:37:28.107770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:20:00.476 [2024-11-29 09:37:28.107778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.107906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.107916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.476 [2024-11-29 09:37:28.107924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:00.476 [2024-11-29 09:37:28.107933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.113249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 
09:37:28.113284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.476 [2024-11-29 09:37:28.113295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.293 ms 00:20:00.476 [2024-11-29 09:37:28.113311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.116054] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:00.476 [2024-11-29 09:37:28.116093] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.476 [2024-11-29 09:37:28.116105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.116113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.476 [2024-11-29 09:37:28.116122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.715 ms 00:20:00.476 [2024-11-29 09:37:28.116130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.131303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.131336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.476 [2024-11-29 09:37:28.131346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.127 ms 00:20:00.476 [2024-11-29 09:37:28.131353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.133247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.133278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.476 [2024-11-29 09:37:28.133286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:20:00.476 [2024-11-29 09:37:28.133293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.476 [2024-11-29 09:37:28.135045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.476 [2024-11-29 09:37:28.135074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:00.477 [2024-11-29 09:37:28.135082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:20:00.477 [2024-11-29 09:37:28.135090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.135396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.135406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.477 [2024-11-29 09:37:28.135414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:00.477 [2024-11-29 09:37:28.135421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.151245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.151287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.477 [2024-11-29 09:37:28.151298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.806 ms 00:20:00.477 [2024-11-29 09:37:28.151306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.158752] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:00.477 [2024-11-29 09:37:28.161021] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.161053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.477 [2024-11-29 09:37:28.161064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.678 ms 00:20:00.477 [2024-11-29 09:37:28.161076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.161122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.161137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.477 [2024-11-29 09:37:28.161150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.477 [2024-11-29 09:37:28.161158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.161237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.161247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.477 [2024-11-29 09:37:28.161314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:00.477 [2024-11-29 09:37:28.161322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.161343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.161351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.477 [2024-11-29 09:37:28.161359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.477 [2024-11-29 09:37:28.161365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.161396] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.477 [2024-11-29 09:37:28.161408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.161416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.477 [2024-11-29 09:37:28.161423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.477 [2024-11-29 09:37:28.161432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.165358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.165495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.477 [2024-11-29 09:37:28.165519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.910 ms 00:20:00.477 [2024-11-29 09:37:28.165527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.165613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.477 [2024-11-29 09:37:28.165624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.477 [2024-11-29 09:37:28.165637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:00.477 [2024-11-29 09:37:28.165647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.477 [2024-11-29 09:37:28.166527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.234 ms, result 0 00:20:01.862  [2024-11-29T09:37:30.522Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-29T09:37:31.461Z] Copying: 40/1024 [MB] (27 MBps) 
[2024-11-29T09:37:32.431Z] Copying: 68/1024 [MB] (27 MBps) [2024-11-29T09:37:33.452Z] Copying: 89/1024 [MB] (21 MBps) [2024-11-29T09:37:34.396Z] Copying: 107/1024 [MB] (17 MBps) [2024-11-29T09:37:35.339Z] Copying: 123/1024 [MB] (16 MBps) [2024-11-29T09:37:36.279Z] Copying: 136/1024 [MB] (13 MBps) [2024-11-29T09:37:37.220Z] Copying: 158/1024 [MB] (21 MBps) [2024-11-29T09:37:38.608Z] Copying: 184/1024 [MB] (25 MBps) [2024-11-29T09:37:39.181Z] Copying: 205/1024 [MB] (21 MBps) [2024-11-29T09:37:40.571Z] Copying: 220/1024 [MB] (14 MBps) [2024-11-29T09:37:41.514Z] Copying: 232/1024 [MB] (12 MBps) [2024-11-29T09:37:42.453Z] Copying: 243/1024 [MB] (10 MBps) [2024-11-29T09:37:43.392Z] Copying: 263/1024 [MB] (19 MBps) [2024-11-29T09:37:44.337Z] Copying: 291/1024 [MB] (28 MBps) [2024-11-29T09:37:45.288Z] Copying: 321/1024 [MB] (30 MBps) [2024-11-29T09:37:46.242Z] Copying: 337/1024 [MB] (15 MBps) [2024-11-29T09:37:47.185Z] Copying: 349/1024 [MB] (12 MBps) [2024-11-29T09:37:48.561Z] Copying: 369/1024 [MB] (19 MBps) [2024-11-29T09:37:49.495Z] Copying: 399/1024 [MB] (30 MBps) [2024-11-29T09:37:50.439Z] Copying: 452/1024 [MB] (53 MBps) [2024-11-29T09:37:51.382Z] Copying: 477/1024 [MB] (24 MBps) [2024-11-29T09:37:52.328Z] Copying: 499/1024 [MB] (22 MBps) [2024-11-29T09:37:53.270Z] Copying: 518/1024 [MB] (18 MBps) [2024-11-29T09:37:54.214Z] Copying: 532/1024 [MB] (13 MBps) [2024-11-29T09:37:55.599Z] Copying: 551/1024 [MB] (19 MBps) [2024-11-29T09:37:56.542Z] Copying: 568/1024 [MB] (16 MBps) [2024-11-29T09:37:57.485Z] Copying: 582/1024 [MB] (14 MBps) [2024-11-29T09:37:58.429Z] Copying: 597/1024 [MB] (15 MBps) [2024-11-29T09:37:59.375Z] Copying: 609/1024 [MB] (11 MBps) [2024-11-29T09:38:00.318Z] Copying: 632/1024 [MB] (23 MBps) [2024-11-29T09:38:01.262Z] Copying: 646/1024 [MB] (13 MBps) [2024-11-29T09:38:02.208Z] Copying: 659/1024 [MB] (12 MBps) [2024-11-29T09:38:03.598Z] Copying: 670/1024 [MB] (11 MBps) [2024-11-29T09:38:04.543Z] Copying: 689/1024 [MB] (18 MBps) [2024-11-29T09:38:05.561Z] Copying: 703/1024 [MB] (13 MBps) [2024-11-29T09:38:06.505Z] Copying: 713/1024 [MB] (10 MBps) [2024-11-29T09:38:07.450Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-29T09:38:08.395Z] Copying: 734/1024 [MB] (10 MBps) [2024-11-29T09:38:09.341Z] Copying: 744/1024 [MB] (10 MBps) [2024-11-29T09:38:10.280Z] Copying: 755/1024 [MB] (10 MBps) [2024-11-29T09:38:11.221Z] Copying: 785/1024 [MB] (29 MBps) [2024-11-29T09:38:12.601Z] Copying: 812/1024 [MB] (26 MBps) [2024-11-29T09:38:13.543Z] Copying: 842/1024 [MB] (30 MBps) [2024-11-29T09:38:14.486Z] Copying: 857/1024 [MB] (14 MBps) [2024-11-29T09:38:15.429Z] Copying: 869/1024 [MB] (12 MBps) [2024-11-29T09:38:16.369Z] Copying: 884/1024 [MB] (14 MBps) [2024-11-29T09:38:17.311Z] Copying: 896/1024 [MB] (12 MBps) [2024-11-29T09:38:18.249Z] Copying: 907/1024 [MB] (10 MBps) [2024-11-29T09:38:19.182Z] Copying: 922/1024 [MB] (15 MBps) [2024-11-29T09:38:20.568Z] Copying: 961/1024 [MB] (39 MBps) [2024-11-29T09:38:21.512Z] Copying: 981/1024 [MB] (20 MBps) [2024-11-29T09:38:22.458Z] Copying: 992/1024 [MB] (10 MBps) [2024-11-29T09:38:23.032Z] Copying: 1009/1024 [MB] (17 MBps) [2024-11-29T09:38:23.032Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-29 09:38:22.781561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.781665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:55.306 [2024-11-29 09:38:22.781683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:55.306 [2024-11-29 
09:38:22.781703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.781727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.306 [2024-11-29 09:38:22.782506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.782531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:55.306 [2024-11-29 09:38:22.782543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:20:55.306 [2024-11-29 09:38:22.782551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.784751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.784800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:55.306 [2024-11-29 09:38:22.784811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:20:55.306 [2024-11-29 09:38:22.784833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.802778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.802844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:55.306 [2024-11-29 09:38:22.802857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.927 ms 00:20:55.306 [2024-11-29 09:38:22.802865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.809053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.809099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:55.306 [2024-11-29 09:38:22.809110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:20:55.306 [2024-11-29 09:38:22.809118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.812033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.812239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:55.306 [2024-11-29 09:38:22.812258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:20:55.306 [2024-11-29 09:38:22.812266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.817267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.817326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:55.306 [2024-11-29 09:38:22.817349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.939 ms 00:20:55.306 [2024-11-29 09:38:22.817358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.817487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.817497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:55.306 [2024-11-29 09:38:22.817506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:55.306 [2024-11-29 09:38:22.817514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.820995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.821048] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:55.306 [2024-11-29 09:38:22.821072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:20:55.306 [2024-11-29 09:38:22.821079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.824157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.824208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:55.306 [2024-11-29 09:38:22.824218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:20:55.306 [2024-11-29 09:38:22.824225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.826669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.826850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:55.306 [2024-11-29 09:38:22.826868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:20:55.306 [2024-11-29 09:38:22.826876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.829034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.306 [2024-11-29 09:38:22.829090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:55.306 [2024-11-29 09:38:22.829099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms 00:20:55.306 [2024-11-29 09:38:22.829107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.306 [2024-11-29 09:38:22.829150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:55.306 [2024-11-29 09:38:22.829165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:55.306 [2024-11-29 09:38:22.829237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829259] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829449] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 
09:38:22.829696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:20:55.307 [2024-11-29 09:38:22.829884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:55.307 [2024-11-29 09:38:22.829990] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:55.307 [2024-11-29 09:38:22.829999] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d7621d15-6b86-4809-9c43-c56efb3866c5 00:20:55.307 [2024-11-29 09:38:22.830008] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:55.307 [2024-11-29 09:38:22.830015] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:55.307 [2024-11-29 09:38:22.830023] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:55.307 [2024-11-29 09:38:22.830032] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:55.307 [2024-11-29 09:38:22.830039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:55.307 [2024-11-29 09:38:22.830047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:55.307 [2024-11-29 09:38:22.830054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:55.307 [2024-11-29 09:38:22.830060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:55.307 [2024-11-29 09:38:22.830067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:55.307 [2024-11-29 09:38:22.830074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.307 [2024-11-29 09:38:22.830097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:55.307 [2024-11-29 09:38:22.830114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:20:55.307 [2024-11-29 09:38:22.830122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:55.307 [2024-11-29 09:38:22.832828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.307 [2024-11-29 09:38:22.832993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:55.307 [2024-11-29 09:38:22.833056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:20:55.307 [2024-11-29 09:38:22.833080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.307 [2024-11-29 09:38:22.833239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.307 [2024-11-29 09:38:22.833269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:55.307 [2024-11-29 09:38:22.833290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:55.307 [2024-11-29 09:38:22.833310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.307 [2024-11-29 09:38:22.841234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.307 [2024-11-29 09:38:22.841404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.307 [2024-11-29 09:38:22.841459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.307 [2024-11-29 09:38:22.841481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.307 [2024-11-29 09:38:22.841560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.307 [2024-11-29 09:38:22.841582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.307 [2024-11-29 09:38:22.841646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.307 [2024-11-29 09:38:22.841666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.307 [2024-11-29 09:38:22.841879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.307 [2024-11-29 09:38:22.841919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.307 [2024-11-29 09:38:22.841941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.307 [2024-11-29 09:38:22.841961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.307 [2024-11-29 09:38:22.841998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.307 [2024-11-29 09:38:22.842022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.307 [2024-11-29 09:38:22.842042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.307 [2024-11-29 09:38:22.842112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.855437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.855682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.308 [2024-11-29 09:38:22.855750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 [2024-11-29 09:38:22.855773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.865689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.865858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.308 [2024-11-29 09:38:22.865911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 
[2024-11-29 09:38:22.865935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.866009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.866033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.308 [2024-11-29 09:38:22.866053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 [2024-11-29 09:38:22.866072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.866138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.866243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.308 [2024-11-29 09:38:22.866260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 [2024-11-29 09:38:22.866268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.866344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.866354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.308 [2024-11-29 09:38:22.866363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 [2024-11-29 09:38:22.866371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.866406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.866416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:55.308 [2024-11-29 09:38:22.866429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 [2024-11-29 09:38:22.866436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.866473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.866483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.308 [2024-11-29 09:38:22.866491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 [2024-11-29 09:38:22.866499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.866540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.308 [2024-11-29 09:38:22.866551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.308 [2024-11-29 09:38:22.866562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.308 [2024-11-29 09:38:22.866570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.308 [2024-11-29 09:38:22.866727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.144 ms, result 0 00:20:55.877 00:20:55.877 00:20:55.877 09:38:23 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:55.877 [2024-11-29 09:38:23.448071] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
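As a quick cross-check of the spdk_dd restore invocation above: it moves --count=262144 blocks through ftl0, and the block counts, region sizes, and throughput figures scattered through this log are mutually consistent if the FTL bdev exposes 4 KiB logical blocks. The log never states the block size directly, so treat that as an assumption; a minimal sketch in Python:

    # Editorial sanity check; the 4096-byte block size is an assumption,
    # all other figures are taken verbatim from the log.
    blocks = 262144                      # from: spdk_dd --count=262144
    block_size = 4096                    # assumed FTL logical block size (bytes)
    assert blocks * block_size == 1024 * 1024 * 1024  # 1024 MiB, matches "Copying: 1024/1024 [MB]"

    # The l2p region (type 0x2, blk_sz 0x5000 blocks) is 0x5000 * 4096 B = 80 MiB,
    # exactly the 20971520 L2P entries * 4 B address size dumped with the layout.
    assert 0x5000 * block_size == 20971520 * 4

    # First-pass throughput: 1024 MiB between "FTL startup ... result 0"
    # (09:37:28.166) and the first shutdown step (09:38:22.781), about 54.6 s:
    print(1024 / 54.6)                   # ~18.8 MBps, consistent with "(average 18 MBps)"

The same 262144-block count now governs the read-back pass whose FTL startup follows below.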
00:20:55.877 [2024-11-29 09:38:23.448211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90820 ] 00:20:55.877 [2024-11-29 09:38:23.592561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:56.138 [2024-11-29 09:38:23.623949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.138 [2024-11-29 09:38:23.652640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.138 [2024-11-29 09:38:23.763328] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:56.138 [2024-11-29 09:38:23.763408] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:56.401 [2024-11-29 09:38:23.925085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.925330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:56.402 [2024-11-29 09:38:23.925357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:56.402 [2024-11-29 09:38:23.925374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.925455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.925467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.402 [2024-11-29 09:38:23.925477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:56.402 [2024-11-29 09:38:23.925489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.925512] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:56.402 [2024-11-29 09:38:23.925827] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:56.402 [2024-11-29 09:38:23.925847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.925858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.402 [2024-11-29 09:38:23.925871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:20:56.402 [2024-11-29 09:38:23.925879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.927564] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:56.402 [2024-11-29 09:38:23.931516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.931573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:56.402 [2024-11-29 09:38:23.931610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.954 ms 00:20:56.402 [2024-11-29 09:38:23.931618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.931717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.931733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:56.402 [2024-11-29 09:38:23.931745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:56.402 [2024-11-29 
09:38:23.931754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.940100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.940146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.402 [2024-11-29 09:38:23.940157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.300 ms 00:20:56.402 [2024-11-29 09:38:23.940165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.940264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.940274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.402 [2024-11-29 09:38:23.940285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:56.402 [2024-11-29 09:38:23.940293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.940357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.940367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:56.402 [2024-11-29 09:38:23.940380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:56.402 [2024-11-29 09:38:23.940387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.940414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:56.402 [2024-11-29 09:38:23.942502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.942543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.402 [2024-11-29 09:38:23.942554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:20:56.402 [2024-11-29 09:38:23.942562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.942611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.942622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:56.402 [2024-11-29 09:38:23.942637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:56.402 [2024-11-29 09:38:23.942647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.942670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:56.402 [2024-11-29 09:38:23.942691] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:56.402 [2024-11-29 09:38:23.942734] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:56.402 [2024-11-29 09:38:23.942750] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:56.402 [2024-11-29 09:38:23.942854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:56.402 [2024-11-29 09:38:23.942868] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:56.402 [2024-11-29 09:38:23.942879] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:56.402 
[2024-11-29 09:38:23.942889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:56.402 [2024-11-29 09:38:23.942899] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:56.402 [2024-11-29 09:38:23.942907] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:56.402 [2024-11-29 09:38:23.942915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:56.402 [2024-11-29 09:38:23.942922] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:56.402 [2024-11-29 09:38:23.942930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:56.402 [2024-11-29 09:38:23.942938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.942946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:56.402 [2024-11-29 09:38:23.942957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:20:56.402 [2024-11-29 09:38:23.942966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.943049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.402 [2024-11-29 09:38:23.943058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:56.402 [2024-11-29 09:38:23.943066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:56.402 [2024-11-29 09:38:23.943073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.402 [2024-11-29 09:38:23.943171] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:56.402 [2024-11-29 09:38:23.943187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:56.402 [2024-11-29 09:38:23.943197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:56.402 [2024-11-29 09:38:23.943205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:56.402 [2024-11-29 09:38:23.943225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:56.402 [2024-11-29 09:38:23.943248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:56.402 [2024-11-29 09:38:23.943257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:56.402 [2024-11-29 09:38:23.943275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:56.402 [2024-11-29 09:38:23.943283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:56.402 [2024-11-29 09:38:23.943290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:56.402 [2024-11-29 09:38:23.943298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:56.402 [2024-11-29 09:38:23.943307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:56.402 [2024-11-29 09:38:23.943314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:20:56.402 [2024-11-29 09:38:23.943333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:56.402 [2024-11-29 09:38:23.943341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:56.402 [2024-11-29 09:38:23.943357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.402 [2024-11-29 09:38:23.943373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:56.402 [2024-11-29 09:38:23.943380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.402 [2024-11-29 09:38:23.943401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:56.402 [2024-11-29 09:38:23.943409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.402 [2024-11-29 09:38:23.943424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:56.402 [2024-11-29 09:38:23.943431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.402 [2024-11-29 09:38:23.943446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:56.402 [2024-11-29 09:38:23.943454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:56.402 [2024-11-29 09:38:23.943469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:56.402 [2024-11-29 09:38:23.943476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:56.402 [2024-11-29 09:38:23.943483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:56.402 [2024-11-29 09:38:23.943491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:56.402 [2024-11-29 09:38:23.943499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:56.402 [2024-11-29 09:38:23.943507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.402 [2024-11-29 09:38:23.943514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:56.403 [2024-11-29 09:38:23.943524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:56.403 [2024-11-29 09:38:23.943531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.403 [2024-11-29 09:38:23.943538] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:56.403 [2024-11-29 09:38:23.943546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:56.403 [2024-11-29 09:38:23.943553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:56.403 [2024-11-29 09:38:23.943560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.403 [2024-11-29 09:38:23.943567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:56.403 [2024-11-29 09:38:23.943577] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:56.403 [2024-11-29 09:38:23.943838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:56.403 [2024-11-29 09:38:23.943880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:56.403 [2024-11-29 09:38:23.943901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:56.403 [2024-11-29 09:38:23.943920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:56.403 [2024-11-29 09:38:23.943941] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:56.403 [2024-11-29 09:38:23.943975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:56.403 [2024-11-29 09:38:23.944006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:56.403 [2024-11-29 09:38:23.944035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:56.403 [2024-11-29 09:38:23.944073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:56.403 [2024-11-29 09:38:23.944102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:56.403 [2024-11-29 09:38:23.944130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:56.403 [2024-11-29 09:38:23.944159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:56.403 [2024-11-29 09:38:23.944717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:56.403 [2024-11-29 09:38:23.944847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:56.403 [2024-11-29 09:38:23.944881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:56.403 [2024-11-29 09:38:23.944910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:56.403 [2024-11-29 09:38:23.944941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:56.403 [2024-11-29 09:38:23.944969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:56.403 [2024-11-29 09:38:23.945026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:56.403 [2024-11-29 09:38:23.945036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:56.403 [2024-11-29 09:38:23.945044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:56.403 [2024-11-29 09:38:23.945054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:20:56.403 [2024-11-29 09:38:23.945075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:56.403 [2024-11-29 09:38:23.945083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:56.403 [2024-11-29 09:38:23.945095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:56.403 [2024-11-29 09:38:23.945104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:56.403 [2024-11-29 09:38:23.945116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:23.945125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:56.403 [2024-11-29 09:38:23.945135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:20:56.403 [2024-11-29 09:38:23.945147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:23.958713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:23.958870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.403 [2024-11-29 09:38:23.958888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.469 ms 00:20:56.403 [2024-11-29 09:38:23.958897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:23.958992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:23.959002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:56.403 [2024-11-29 09:38:23.959011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:56.403 [2024-11-29 09:38:23.959018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:23.987830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:23.987886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.403 [2024-11-29 09:38:23.987908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.741 ms 00:20:56.403 [2024-11-29 09:38:23.987917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:23.987968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:23.987979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.403 [2024-11-29 09:38:23.987992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.403 [2024-11-29 09:38:23.988003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:23.988512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:23.988567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.403 [2024-11-29 09:38:23.988579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:20:56.403 [2024-11-29 09:38:23.988610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:23.988768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:56.403 [2024-11-29 09:38:23.988778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.403 [2024-11-29 09:38:23.988787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:56.403 [2024-11-29 09:38:23.988795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:23.996466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:23.996664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.403 [2024-11-29 09:38:23.996683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.619 ms 00:20:56.403 [2024-11-29 09:38:23.996704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.000521] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:56.403 [2024-11-29 09:38:24.000709] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:56.403 [2024-11-29 09:38:24.000734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.000743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:56.403 [2024-11-29 09:38:24.000752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.930 ms 00:20:56.403 [2024-11-29 09:38:24.000759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.016576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.016633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:56.403 [2024-11-29 09:38:24.016645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.772 ms 00:20:56.403 [2024-11-29 09:38:24.016653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.019838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.020007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:56.403 [2024-11-29 09:38:24.020024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:20:56.403 [2024-11-29 09:38:24.020032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.023093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.023253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:56.403 [2024-11-29 09:38:24.023270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:20:56.403 [2024-11-29 09:38:24.023288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.023633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.023649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:56.403 [2024-11-29 09:38:24.023669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:56.403 [2024-11-29 09:38:24.023680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.045528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.045630] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:56.403 [2024-11-29 09:38:24.045645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.829 ms 00:20:56.403 [2024-11-29 09:38:24.045654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.053805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:56.403 [2024-11-29 09:38:24.056719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.056880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:56.403 [2024-11-29 09:38:24.056905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.015 ms 00:20:56.403 [2024-11-29 09:38:24.056913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.056996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.403 [2024-11-29 09:38:24.057011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:56.403 [2024-11-29 09:38:24.057021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:56.403 [2024-11-29 09:38:24.057029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.403 [2024-11-29 09:38:24.057096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.404 [2024-11-29 09:38:24.057110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:56.404 [2024-11-29 09:38:24.057123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:56.404 [2024-11-29 09:38:24.057131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.404 [2024-11-29 09:38:24.057149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.404 [2024-11-29 09:38:24.057159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:56.404 [2024-11-29 09:38:24.057167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:56.404 [2024-11-29 09:38:24.057177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.404 [2024-11-29 09:38:24.057211] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:56.404 [2024-11-29 09:38:24.057222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.404 [2024-11-29 09:38:24.057237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:56.404 [2024-11-29 09:38:24.057245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:56.404 [2024-11-29 09:38:24.057253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.404 [2024-11-29 09:38:24.062362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.404 [2024-11-29 09:38:24.062407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:56.404 [2024-11-29 09:38:24.062417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.090 ms 00:20:56.404 [2024-11-29 09:38:24.062435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.404 [2024-11-29 09:38:24.062514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.404 [2024-11-29 09:38:24.062527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:56.404 [2024-11-29 09:38:24.062538] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:56.404 [2024-11-29 09:38:24.062547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.404 [2024-11-29 09:38:24.064067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.542 ms, result 0 00:20:57.792  [2024-11-29T09:39:38.144Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-29 09:39:38.136745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.418 [2024-11-29 09:39:38.136825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.418 [2024-11-29 09:39:38.136847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:10.418 [2024-11-29 09:39:38.136856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.418 [2024-11-29 09:39:38.136881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.418 [2024-11-29 09:39:38.137875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.418 [2024-11-29 09:39:38.137910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.418 [2024-11-29 09:39:38.137923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:22:10.418 [2024-11-29 09:39:38.137933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.418 [2024-11-29 09:39:38.138187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.418 [2024-11-29 09:39:38.138259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.418 [2024-11-29 09:39:38.138276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:22:10.418 [2024-11-29 09:39:38.138285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-29 09:39:38.142559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-29 09:39:38.142746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.679 [2024-11-29 09:39:38.142766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.254 ms 00:22:10.679 [2024-11-29 09:39:38.142775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-29 09:39:38.149797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-29 09:39:38.149830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.679 [2024-11-29
09:39:38.149841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.992 ms 00:22:10.679 [2024-11-29 09:39:38.149858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-29 09:39:38.153177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-29 09:39:38.153340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.679 [2024-11-29 09:39:38.153402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:22:10.679 [2024-11-29 09:39:38.153426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-29 09:39:38.158555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-29 09:39:38.158756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.679 [2024-11-29 09:39:38.158825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.000 ms 00:22:10.679 [2024-11-29 09:39:38.158851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-29 09:39:38.158985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.679 [2024-11-29 09:39:38.159138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.679 [2024-11-29 09:39:38.159163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:10.679 [2024-11-29 09:39:38.159191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.679 [2024-11-29 09:39:38.161920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-29 09:39:38.162539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:10.680 [2024-11-29 09:39:38.162576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:22:10.680 [2024-11-29 09:39:38.162613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-29 09:39:38.165112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-29 09:39:38.165331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:10.680 [2024-11-29 09:39:38.165449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:22:10.680 [2024-11-29 09:39:38.165500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-29 09:39:38.167678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-29 09:39:38.167890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.680 [2024-11-29 09:39:38.168004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:22:10.680 [2024-11-29 09:39:38.168052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-29 09:39:38.170166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.680 [2024-11-29 09:39:38.170381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:10.680 [2024-11-29 09:39:38.170499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.910 ms 00:22:10.680 [2024-11-29 09:39:38.170525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.680 [2024-11-29 09:39:38.170582] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:10.680 [2024-11-29 09:39:38.170662] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.170997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171076] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 
[2024-11-29 09:39:38.171502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:22:10.680 [2024-11-29 09:39:38.171905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:10.680 [2024-11-29 09:39:38.171921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.171937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.171956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.171971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.171996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.172117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:22:10.681 [2024-11-29 09:39:38.173136] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:10.681 [2024-11-29 09:39:38.173154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d7621d15-6b86-4809-9c43-c56efb3866c5 00:22:10.681 [2024-11-29 09:39:38.173165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:10.681 [2024-11-29 09:39:38.173174] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:10.681 [2024-11-29 09:39:38.173182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:10.681 [2024-11-29 09:39:38.173192] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:10.681 [2024-11-29 09:39:38.173200] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:10.681 [2024-11-29 09:39:38.173208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:10.681 [2024-11-29 09:39:38.173223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:10.681 [2024-11-29 09:39:38.173230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:10.681 [2024-11-29 09:39:38.173237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:10.681 [2024-11-29 09:39:38.173245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.681 [2024-11-29 09:39:38.173266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:10.681 [2024-11-29 09:39:38.173275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:22:10.681 [2024-11-29 09:39:38.173283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.175723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.681 [2024-11-29 09:39:38.175753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.681 [2024-11-29 09:39:38.175764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:22:10.681 [2024-11-29 09:39:38.175785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.175903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.681 [2024-11-29 09:39:38.175912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.681 [2024-11-29 09:39:38.175921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:10.681 [2024-11-29 09:39:38.175928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.183784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.183937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.681 [2024-11-29 09:39:38.183954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.183970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.184031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.184040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.681 [2024-11-29 09:39:38.184049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.184062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:22:10.681 [2024-11-29 09:39:38.184125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.184135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.681 [2024-11-29 09:39:38.184143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.184151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.184170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.184179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.681 [2024-11-29 09:39:38.184186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.184194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.197217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.197252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.681 [2024-11-29 09:39:38.197262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.197275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.207365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.207416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.681 [2024-11-29 09:39:38.207427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.207435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.207484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.207494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.681 [2024-11-29 09:39:38.207503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.207511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.207544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.207561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.681 [2024-11-29 09:39:38.207569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.207577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.207677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.207687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.681 [2024-11-29 09:39:38.207702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.207710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.207737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.207750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.681 [2024-11-29 09:39:38.207758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 
09:39:38.207766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.207804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.207813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.681 [2024-11-29 09:39:38.207821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.207830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.207872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.681 [2024-11-29 09:39:38.207886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.681 [2024-11-29 09:39:38.207894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.681 [2024-11-29 09:39:38.207902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.681 [2024-11-29 09:39:38.208027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.272 ms, result 0 00:22:10.942 00:22:10.942 00:22:10.942 09:39:38 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:13.493 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:13.493 09:39:40 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:13.493 [2024-11-29 09:39:40.730815] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:13.493 [2024-11-29 09:39:40.730929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91614 ] 00:22:13.493 [2024-11-29 09:39:40.863347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:13.493 [2024-11-29 09:39:40.893136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.493 [2024-11-29 09:39:40.912257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.493 [2024-11-29 09:39:41.000741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.493 [2024-11-29 09:39:41.000805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.493 [2024-11-29 09:39:41.158084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.158128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.493 [2024-11-29 09:39:41.158142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.493 [2024-11-29 09:39:41.158149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.158203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.158213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.493 [2024-11-29 09:39:41.158222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:13.493 [2024-11-29 09:39:41.158231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.158252] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.493 [2024-11-29 09:39:41.158489] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.493 [2024-11-29 09:39:41.158507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.158517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.493 [2024-11-29 09:39:41.158525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:22:13.493 [2024-11-29 09:39:41.158532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.159702] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:13.493 [2024-11-29 09:39:41.162652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.162687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:13.493 [2024-11-29 09:39:41.162705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:22:13.493 [2024-11-29 09:39:41.162713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.162781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.162791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:13.493 [2024-11-29 09:39:41.162800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:13.493 [2024-11-29 09:39:41.162808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.168131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.168163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.493 [2024-11-29 09:39:41.168174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.261 ms 00:22:13.493 [2024-11-29 09:39:41.168182] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.168264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.168273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.493 [2024-11-29 09:39:41.168283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:13.493 [2024-11-29 09:39:41.168291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.168331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.168340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.493 [2024-11-29 09:39:41.168352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.493 [2024-11-29 09:39:41.168358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.168380] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.493 [2024-11-29 09:39:41.169805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.493 [2024-11-29 09:39:41.169938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.493 [2024-11-29 09:39:41.169961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:22:13.493 [2024-11-29 09:39:41.169975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.493 [2024-11-29 09:39:41.170007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.494 [2024-11-29 09:39:41.170019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.494 [2024-11-29 09:39:41.170030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:13.494 [2024-11-29 09:39:41.170038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.494 [2024-11-29 09:39:41.170061] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:13.494 [2024-11-29 09:39:41.170080] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:13.494 [2024-11-29 09:39:41.170120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:13.494 [2024-11-29 09:39:41.170139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:13.494 [2024-11-29 09:39:41.170242] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.494 [2024-11-29 09:39:41.170256] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.494 [2024-11-29 09:39:41.170267] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:13.494 [2024-11-29 09:39:41.170281] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170291] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170300] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:13.494 [2024-11-29 09:39:41.170308] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:22:13.494 [2024-11-29 09:39:41.170319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.494 [2024-11-29 09:39:41.170327] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.494 [2024-11-29 09:39:41.170336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.494 [2024-11-29 09:39:41.170344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.494 [2024-11-29 09:39:41.170353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:22:13.494 [2024-11-29 09:39:41.170365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.494 [2024-11-29 09:39:41.170448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.494 [2024-11-29 09:39:41.170457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.494 [2024-11-29 09:39:41.170470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:13.494 [2024-11-29 09:39:41.170478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.494 [2024-11-29 09:39:41.170573] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.494 [2024-11-29 09:39:41.170601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.494 [2024-11-29 09:39:41.170610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.494 [2024-11-29 09:39:41.170642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.494 [2024-11-29 09:39:41.170670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.494 [2024-11-29 09:39:41.170687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.494 [2024-11-29 09:39:41.170695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:13.494 [2024-11-29 09:39:41.170702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.494 [2024-11-29 09:39:41.170710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.494 [2024-11-29 09:39:41.170717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:13.494 [2024-11-29 09:39:41.170725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.494 [2024-11-29 09:39:41.170740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.494 [2024-11-29 09:39:41.170762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170769] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.494 [2024-11-29 09:39:41.170784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.494 [2024-11-29 09:39:41.170812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.494 [2024-11-29 09:39:41.170836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.494 [2024-11-29 09:39:41.170858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.494 [2024-11-29 09:39:41.170873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.494 [2024-11-29 09:39:41.170881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:13.494 [2024-11-29 09:39:41.170888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.494 [2024-11-29 09:39:41.170896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.494 [2024-11-29 09:39:41.170903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:13.494 [2024-11-29 09:39:41.170911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.494 [2024-11-29 09:39:41.170925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:13.494 [2024-11-29 09:39:41.170935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170942] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.494 [2024-11-29 09:39:41.170951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.494 [2024-11-29 09:39:41.170959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.494 [2024-11-29 09:39:41.170970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.494 [2024-11-29 09:39:41.170978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.494 [2024-11-29 09:39:41.170986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.494 [2024-11-29 09:39:41.170994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.494 [2024-11-29 09:39:41.171002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.494 [2024-11-29 09:39:41.171009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.494 [2024-11-29 09:39:41.171017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:22:13.494 [2024-11-29 09:39:41.171026] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.494 [2024-11-29 09:39:41.171035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.494 [2024-11-29 09:39:41.171044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:13.494 [2024-11-29 09:39:41.171052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:13.494 [2024-11-29 09:39:41.171060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:13.494 [2024-11-29 09:39:41.171071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:13.494 [2024-11-29 09:39:41.171079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:13.494 [2024-11-29 09:39:41.171088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:13.494 [2024-11-29 09:39:41.171096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:13.494 [2024-11-29 09:39:41.171103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:13.494 [2024-11-29 09:39:41.171111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:13.494 [2024-11-29 09:39:41.171119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:13.494 [2024-11-29 09:39:41.171128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:13.494 [2024-11-29 09:39:41.171136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:13.494 [2024-11-29 09:39:41.171144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:13.494 [2024-11-29 09:39:41.171152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:13.494 [2024-11-29 09:39:41.171160] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.494 [2024-11-29 09:39:41.171169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.494 [2024-11-29 09:39:41.171178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.494 [2024-11-29 09:39:41.171186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.494 [2024-11-29 09:39:41.171194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.495 [2024-11-29 09:39:41.171204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.495 [2024-11-29 09:39:41.171212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.171221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.495 [2024-11-29 09:39:41.171229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:22:13.495 [2024-11-29 09:39:41.171239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.180817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.180946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.495 [2024-11-29 09:39:41.180963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.531 ms 00:22:13.495 [2024-11-29 09:39:41.180980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.181060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.181068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.495 [2024-11-29 09:39:41.181076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:13.495 [2024-11-29 09:39:41.181083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.198682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.198733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.495 [2024-11-29 09:39:41.198751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.544 ms 00:22:13.495 [2024-11-29 09:39:41.198763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.198817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.198840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.495 [2024-11-29 09:39:41.198852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.495 [2024-11-29 09:39:41.198871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.199300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.199324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.495 [2024-11-29 09:39:41.199339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:22:13.495 [2024-11-29 09:39:41.199351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.199536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.199550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.495 [2024-11-29 09:39:41.199564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:22:13.495 [2024-11-29 09:39:41.199622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.206112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 
09:39:41.206156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.495 [2024-11-29 09:39:41.206170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.455 ms 00:22:13.495 [2024-11-29 09:39:41.206192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.495 [2024-11-29 09:39:41.209467] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:13.495 [2024-11-29 09:39:41.209694] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.495 [2024-11-29 09:39:41.209721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.495 [2024-11-29 09:39:41.209733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.495 [2024-11-29 09:39:41.209745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:22:13.495 [2024-11-29 09:39:41.209755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.229594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.229642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:13.755 [2024-11-29 09:39:41.229653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.756 ms 00:22:13.755 [2024-11-29 09:39:41.229661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.232035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.232069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:13.755 [2024-11-29 09:39:41.232078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:22:13.755 [2024-11-29 09:39:41.232085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.234222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.234255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:13.755 [2024-11-29 09:39:41.234264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:22:13.755 [2024-11-29 09:39:41.234271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.234614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.234632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:13.755 [2024-11-29 09:39:41.234641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:22:13.755 [2024-11-29 09:39:41.234655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.251687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.251739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.755 [2024-11-29 09:39:41.251752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.013 ms 00:22:13.755 [2024-11-29 09:39:41.251760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.259451] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:13.755 [2024-11-29 09:39:41.262071] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.262106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.755 [2024-11-29 09:39:41.262117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.260 ms 00:22:13.755 [2024-11-29 09:39:41.262132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.262219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.262230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:13.755 [2024-11-29 09:39:41.262239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:13.755 [2024-11-29 09:39:41.262247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.262312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.262326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:13.755 [2024-11-29 09:39:41.262334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:13.755 [2024-11-29 09:39:41.262347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.755 [2024-11-29 09:39:41.262366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.755 [2024-11-29 09:39:41.262374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:13.755 [2024-11-29 09:39:41.262382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:13.755 [2024-11-29 09:39:41.262389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.756 [2024-11-29 09:39:41.262423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:13.756 [2024-11-29 09:39:41.262437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.756 [2024-11-29 09:39:41.262447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:13.756 [2024-11-29 09:39:41.262454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:13.756 [2024-11-29 09:39:41.262461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.756 [2024-11-29 09:39:41.266875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.756 [2024-11-29 09:39:41.266912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:13.756 [2024-11-29 09:39:41.266922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.393 ms 00:22:13.756 [2024-11-29 09:39:41.266929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.756 [2024-11-29 09:39:41.267001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.756 [2024-11-29 09:39:41.267011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:13.756 [2024-11-29 09:39:41.267024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:13.756 [2024-11-29 09:39:41.267032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.756 [2024-11-29 09:39:41.268044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.474 ms, result 0 00:22:14.700  [2024-11-29T09:39:43.371Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-29T09:39:44.319Z] Copying: 24/1024 [MB] (12 MBps) 
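The "Copying:" lines that begin here are spdk_dd's in-place progress meter, flattened into the captured log: each refresh becomes a separate tick, and slow intervals switch from MB to kB units. As a rough sanity check of the "average 13 MBps" figure reported at the end of the run, the tick timestamps themselves can be used. A minimal Python sketch, illustrative only and not part of the SPDK test suite; the two timestamps are copied from the first and last ticks of this run:

    from datetime import datetime

    # First and last progress ticks of this spdk_dd run, copied from the log.
    first = datetime.fromisoformat("2024-11-29T09:39:43")  # Copying: 11/1024 [MB]
    last = datetime.fromisoformat("2024-11-29T09:40:55")   # Copying: 1024/1024 [MB]

    elapsed = (last - first).total_seconds()    # ~72 s of wall clock
    print(f"{(1024 - 11) / elapsed:.1f} MiB/s") # ~14 MiB/s over the tick span, in line
                                                # with the reported "average 13 MBps"
                                                # (spdk_dd's average also counts time
                                                # before the first tick)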
[2024-11-29T09:39:45.706Z] Copying: 35/1024 [MB] (11 MBps) [… ≈55 intermediate progress-meter ticks condensed: 35 → 874/1024 [MB] between 09:39:45Z and 09:40:41Z, per-tick rates ≈9-45 MBps, slow intervals reported in kB against the 1048576 kB total …] [2024-11-29T09:40:42.295Z] 
Copying: 906160/1048576 [kB] (10176 kBps) [2024-11-29T09:40:43.682Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-29T09:40:44.627Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-29T09:40:45.570Z] Copying: 938272/1048576 [kB] (10236 kBps) [2024-11-29T09:40:46.513Z] Copying: 926/1024 [MB] (10 MBps) [2024-11-29T09:40:47.458Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-29T09:40:48.403Z] Copying: 948/1024 [MB] (11 MBps) [2024-11-29T09:40:49.348Z] Copying: 960/1024 [MB] (11 MBps) [2024-11-29T09:40:50.293Z] Copying: 971/1024 [MB] (11 MBps) [2024-11-29T09:40:51.680Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-29T09:40:52.622Z] Copying: 994/1024 [MB] (11 MBps) [2024-11-29T09:40:53.562Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-29T09:40:54.500Z] Copying: 1017/1024 [MB] (11 MBps) [2024-11-29T09:40:55.081Z] Copying: 1048100/1048576 [kB] (6652 kBps) [2024-11-29T09:40:55.081Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-29 09:40:54.777142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.777200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:27.355 [2024-11-29 09:40:54.777222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:27.355 [2024-11-29 09:40:54.777234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.778239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:27.355 [2024-11-29 09:40:54.780753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.780796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:27.355 [2024-11-29 09:40:54.780806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.479 ms 00:23:27.355 [2024-11-29 09:40:54.780814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.792886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.792921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:27.355 [2024-11-29 09:40:54.792932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.934 ms 00:23:27.355 [2024-11-29 09:40:54.792940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.810303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.810463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:27.355 [2024-11-29 09:40:54.810480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.347 ms 00:23:27.355 [2024-11-29 09:40:54.810496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.816654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.816761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:27.355 [2024-11-29 09:40:54.816775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms 00:23:27.355 [2024-11-29 09:40:54.816782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.817888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.817914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:27.355 [2024-11-29 
09:40:54.817923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:23:27.355 [2024-11-29 09:40:54.817930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.820954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.820985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:27.355 [2024-11-29 09:40:54.821000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:23:27.355 [2024-11-29 09:40:54.821013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.864921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.864992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:27.355 [2024-11-29 09:40:54.865018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.871 ms 00:23:27.355 [2024-11-29 09:40:54.865028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.866781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.866825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:27.355 [2024-11-29 09:40:54.866834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:23:27.355 [2024-11-29 09:40:54.866841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.867833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.867981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:27.355 [2024-11-29 09:40:54.867996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:23:27.355 [2024-11-29 09:40:54.868003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.868811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.868836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:27.355 [2024-11-29 09:40:54.868845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:23:27.355 [2024-11-29 09:40:54.868852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.869686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.355 [2024-11-29 09:40:54.869715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:27.355 [2024-11-29 09:40:54.869724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:23:27.355 [2024-11-29 09:40:54.869731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.355 [2024-11-29 09:40:54.869757] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:27.355 [2024-11-29 09:40:54.869771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110592 / 261120 wr_cnt: 1 state: open 00:23:27.355 [2024-11-29 09:40:54.869780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:27.355 [2024-11-29 09:40:54.869923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869982] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.869996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 
09:40:54.870165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:23:27.356 [2024-11-29 09:40:54.870348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:27.356 [2024-11-29 09:40:54.870518] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:27.356 [2024-11-29 09:40:54.870525] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d7621d15-6b86-4809-9c43-c56efb3866c5 00:23:27.356 [2024-11-29 09:40:54.870538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110592 00:23:27.356 [2024-11-29 
09:40:54.870552] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111552 00:23:27.356 [2024-11-29 09:40:54.870559] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110592 00:23:27.356 [2024-11-29 09:40:54.870567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:23:27.356 [2024-11-29 09:40:54.870574] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:27.356 [2024-11-29 09:40:54.870581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:27.356 [2024-11-29 09:40:54.870598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:27.356 [2024-11-29 09:40:54.870605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:27.356 [2024-11-29 09:40:54.870611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:27.356 [2024-11-29 09:40:54.870618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.356 [2024-11-29 09:40:54.870630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:27.356 [2024-11-29 09:40:54.870638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:23:27.357 [2024-11-29 09:40:54.870645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.872158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.357 [2024-11-29 09:40:54.872252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:27.357 [2024-11-29 09:40:54.872302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:23:27.357 [2024-11-29 09:40:54.872324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.872489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.357 [2024-11-29 09:40:54.872554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:27.357 [2024-11-29 09:40:54.872609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:27.357 [2024-11-29 09:40:54.872637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.877125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.877234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:27.357 [2024-11-29 09:40:54.877283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.877305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.877395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.877419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:27.357 [2024-11-29 09:40:54.877473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.877498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.877553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.877619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:27.357 [2024-11-29 09:40:54.877652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.877671] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.877722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.877785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:27.357 [2024-11-29 09:40:54.877834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.877856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.886345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.886510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:27.357 [2024-11-29 09:40:54.886557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.886598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.893289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.893440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:27.357 [2024-11-29 09:40:54.893488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.893516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.893611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.893654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:27.357 [2024-11-29 09:40:54.893780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.893857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.893898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.893942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:27.357 [2024-11-29 09:40:54.893964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.894004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.894096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.894124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:27.357 [2024-11-29 09:40:54.894172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.894193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.894236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.894303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:27.357 [2024-11-29 09:40:54.894325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.894343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.894392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.894414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:27.357 [2024-11-29 09:40:54.894437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:23:27.357 [2024-11-29 09:40:54.894455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.894532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.357 [2024-11-29 09:40:54.894609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:27.357 [2024-11-29 09:40:54.894654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.357 [2024-11-29 09:40:54.894702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.357 [2024-11-29 09:40:54.894827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 119.575 ms, result 0 00:23:28.292 00:23:28.292 00:23:28.292 09:40:55 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:28.292 [2024-11-29 09:40:55.820112] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:28.292 [2024-11-29 09:40:55.820439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92380 ] 00:23:28.292 [2024-11-29 09:40:55.953398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:28.292 [2024-11-29 09:40:55.984810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.292 [2024-11-29 09:40:56.003883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.551 [2024-11-29 09:40:56.091606] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.551 [2024-11-29 09:40:56.091671] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.551 [2024-11-29 09:40:56.250004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.250047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:28.551 [2024-11-29 09:40:56.250059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:28.551 [2024-11-29 09:40:56.250068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.250124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.250134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.551 [2024-11-29 09:40:56.250142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:28.551 [2024-11-29 09:40:56.250152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.250174] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:28.551 [2024-11-29 09:40:56.250646] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:28.551 [2024-11-29 09:40:56.250683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.250695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.551 
[2024-11-29 09:40:56.250705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:23:28.551 [2024-11-29 09:40:56.250713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.251803] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:28.551 [2024-11-29 09:40:56.254419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.254448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:28.551 [2024-11-29 09:40:56.254468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:23:28.551 [2024-11-29 09:40:56.254478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.254538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.254548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:28.551 [2024-11-29 09:40:56.254560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:28.551 [2024-11-29 09:40:56.254568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.259045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.259076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.551 [2024-11-29 09:40:56.259085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.400 ms 00:23:28.551 [2024-11-29 09:40:56.259092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.259177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.259186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.551 [2024-11-29 09:40:56.259194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:28.551 [2024-11-29 09:40:56.259201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.259242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.259251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:28.551 [2024-11-29 09:40:56.259261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:28.551 [2024-11-29 09:40:56.259269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.551 [2024-11-29 09:40:56.259290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:28.551 [2024-11-29 09:40:56.260558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.551 [2024-11-29 09:40:56.260604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.552 [2024-11-29 09:40:56.260615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:23:28.552 [2024-11-29 09:40:56.260623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.552 [2024-11-29 09:40:56.260652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.552 [2024-11-29 09:40:56.260662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:28.552 [2024-11-29 09:40:56.260676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:28.552 
[2024-11-29 09:40:56.260687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.552 [2024-11-29 09:40:56.260707] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:28.552 [2024-11-29 09:40:56.260725] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:28.552 [2024-11-29 09:40:56.260760] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:28.552 [2024-11-29 09:40:56.260776] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:28.552 [2024-11-29 09:40:56.260879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:28.552 [2024-11-29 09:40:56.260893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:28.552 [2024-11-29 09:40:56.260904] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:28.552 [2024-11-29 09:40:56.260918] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:28.552 [2024-11-29 09:40:56.260927] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:28.552 [2024-11-29 09:40:56.260935] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:28.552 [2024-11-29 09:40:56.260942] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:28.552 [2024-11-29 09:40:56.260950] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:28.552 [2024-11-29 09:40:56.260960] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:28.552 [2024-11-29 09:40:56.260967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.552 [2024-11-29 09:40:56.260982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:28.552 [2024-11-29 09:40:56.260990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:23:28.552 [2024-11-29 09:40:56.260999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.552 [2024-11-29 09:40:56.261081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.552 [2024-11-29 09:40:56.261089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:28.552 [2024-11-29 09:40:56.261096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:28.552 [2024-11-29 09:40:56.261102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.552 [2024-11-29 09:40:56.261198] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:28.552 [2024-11-29 09:40:56.261207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:28.552 [2024-11-29 09:40:56.261215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:28.552 [2024-11-29 09:40:56.261239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261253] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 80.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:28.552 [2024-11-29 09:40:56.261269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.552 [2024-11-29 09:40:56.261283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:28.552 [2024-11-29 09:40:56.261289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:28.552 [2024-11-29 09:40:56.261296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.552 [2024-11-29 09:40:56.261302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:28.552 [2024-11-29 09:40:56.261309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:28.552 [2024-11-29 09:40:56.261316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:28.552 [2024-11-29 09:40:56.261328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:28.552 [2024-11-29 09:40:56.261348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:28.552 [2024-11-29 09:40:56.261366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:28.552 [2024-11-29 09:40:56.261388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:28.552 [2024-11-29 09:40:56.261409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:28.552 [2024-11-29 09:40:56.261428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.552 [2024-11-29 09:40:56.261441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:28.552 [2024-11-29 09:40:56.261448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:28.552 [2024-11-29 09:40:56.261454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.552 [2024-11-29 09:40:56.261461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:28.552 [2024-11-29 09:40:56.261467] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:28.552 [2024-11-29 09:40:56.261474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:28.552 [2024-11-29 09:40:56.261488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:28.552 [2024-11-29 09:40:56.261495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261502] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:28.552 [2024-11-29 09:40:56.261509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:28.552 [2024-11-29 09:40:56.261516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.552 [2024-11-29 09:40:56.261531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:28.552 [2024-11-29 09:40:56.261537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:28.552 [2024-11-29 09:40:56.261544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:28.552 [2024-11-29 09:40:56.261551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:28.552 [2024-11-29 09:40:56.261557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:28.552 [2024-11-29 09:40:56.261564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:28.552 [2024-11-29 09:40:56.261572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:28.552 [2024-11-29 09:40:56.261581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.552 [2024-11-29 09:40:56.261600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:28.552 [2024-11-29 09:40:56.261610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:28.552 [2024-11-29 09:40:56.261617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:28.552 [2024-11-29 09:40:56.261625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:28.552 [2024-11-29 09:40:56.261632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:28.552 [2024-11-29 09:40:56.261648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:28.552 [2024-11-29 09:40:56.261655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:28.552 [2024-11-29 09:40:56.261663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:28.552 [2024-11-29 09:40:56.261670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:28.552 
[2024-11-29 09:40:56.261677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:28.552 [2024-11-29 09:40:56.261685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:28.552 [2024-11-29 09:40:56.261692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:28.552 [2024-11-29 09:40:56.261699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:28.552 [2024-11-29 09:40:56.261706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:28.552 [2024-11-29 09:40:56.261713] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:28.552 [2024-11-29 09:40:56.261722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.552 [2024-11-29 09:40:56.261730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:28.553 [2024-11-29 09:40:56.261740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:28.553 [2024-11-29 09:40:56.261747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:28.553 [2024-11-29 09:40:56.261754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:28.553 [2024-11-29 09:40:56.261762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.553 [2024-11-29 09:40:56.261769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:28.553 [2024-11-29 09:40:56.261776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:23:28.553 [2024-11-29 09:40:56.261785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.553 [2024-11-29 09:40:56.269977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.553 [2024-11-29 09:40:56.270010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.553 [2024-11-29 09:40:56.270020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.150 ms 00:23:28.553 [2024-11-29 09:40:56.270028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.553 [2024-11-29 09:40:56.270103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.553 [2024-11-29 09:40:56.270111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:28.553 [2024-11-29 09:40:56.270119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:28.553 [2024-11-29 09:40:56.270126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.811 [2024-11-29 09:40:56.286678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.811 [2024-11-29 09:40:56.286727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.811 [2024-11-29 09:40:56.286744] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.497 ms 00:23:28.811 [2024-11-29 09:40:56.286755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.811 [2024-11-29 09:40:56.286810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.811 [2024-11-29 09:40:56.286830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.811 [2024-11-29 09:40:56.286843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:28.811 [2024-11-29 09:40:56.286857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.811 [2024-11-29 09:40:56.287270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.811 [2024-11-29 09:40:56.287301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.811 [2024-11-29 09:40:56.287316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:28.811 [2024-11-29 09:40:56.287328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.811 [2024-11-29 09:40:56.287504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.811 [2024-11-29 09:40:56.287522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.811 [2024-11-29 09:40:56.287534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:23:28.811 [2024-11-29 09:40:56.287545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.811 [2024-11-29 09:40:56.293512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.811 [2024-11-29 09:40:56.293550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.812 [2024-11-29 09:40:56.293563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.937 ms 00:23:28.812 [2024-11-29 09:40:56.293599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.295712] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:28.812 [2024-11-29 09:40:56.295745] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:28.812 [2024-11-29 09:40:56.295759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.295767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:28.812 [2024-11-29 09:40:56.295775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:23:28.812 [2024-11-29 09:40:56.295782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.310035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.310064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:28.812 [2024-11-29 09:40:56.310075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.217 ms 00:23:28.812 [2024-11-29 09:40:56.310083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.311763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.311788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:28.812 [2024-11-29 09:40:56.311797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:23:28.812 
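Every management step appears as a fixed four-record group emitted by trace_step (Action, name, duration, status), which makes the log easy to fold into a per-step timing table. A rough sketch over a saved copy of this output (ftl.log is a placeholder filename):

  # Fold the 4-record trace_step groups into "name<TAB>duration" pairs and
  # rank the most expensive steps; ftl.log stands in for a captured log.
  grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' ftl.log |
    sed 's/ [0-9][0-9]*$//' |  # drop the run-on timestamp glued to each name
    paste - - |                # name and duration records alternate, so pair them up
    sort -t: -k3 -rn | head    # e.g. "Initialize NV cache   duration: 16.497 ms"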
[2024-11-29 09:40:56.311804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.312975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.313000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:28.812 [2024-11-29 09:40:56.313008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms 00:23:28.812 [2024-11-29 09:40:56.313019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.313373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.313389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:28.812 [2024-11-29 09:40:56.313398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:28.812 [2024-11-29 09:40:56.313408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.327732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.327772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:28.812 [2024-11-29 09:40:56.327789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.306 ms 00:23:28.812 [2024-11-29 09:40:56.327797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.335125] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:28.812 [2024-11-29 09:40:56.337603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.337628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:28.812 [2024-11-29 09:40:56.337649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.769 ms 00:23:28.812 [2024-11-29 09:40:56.337658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.337714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.337725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:28.812 [2024-11-29 09:40:56.337733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:28.812 [2024-11-29 09:40:56.337748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.339092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.339121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:28.812 [2024-11-29 09:40:56.339134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:23:28.812 [2024-11-29 09:40:56.339145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.339168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.339176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:28.812 [2024-11-29 09:40:56.339184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:28.812 [2024-11-29 09:40:56.339191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.339224] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:28.812 [2024-11-29 
09:40:56.339233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.339240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:28.812 [2024-11-29 09:40:56.339252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:28.812 [2024-11-29 09:40:56.339259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.342239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.342268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:28.812 [2024-11-29 09:40:56.342277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:23:28.812 [2024-11-29 09:40:56.342284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.342350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.812 [2024-11-29 09:40:56.342359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:28.812 [2024-11-29 09:40:56.342369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:28.812 [2024-11-29 09:40:56.342378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.812 [2024-11-29 09:40:56.343248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 92.877 ms, result 0 00:23:30.187  [2024-11-29T09:40:58.849Z] Copying: 41/1024 [MB] (41 MBps) [2024-11-29T09:40:59.782Z] Copying: 89/1024 [MB] (48 MBps) [2024-11-29T09:41:00.771Z] Copying: 139/1024 [MB] (50 MBps) [2024-11-29T09:41:01.705Z] Copying: 188/1024 [MB] (48 MBps) [2024-11-29T09:41:02.639Z] Copying: 237/1024 [MB] (49 MBps) [2024-11-29T09:41:03.573Z] Copying: 284/1024 [MB] (47 MBps) [2024-11-29T09:41:04.944Z] Copying: 332/1024 [MB] (47 MBps) [2024-11-29T09:41:05.888Z] Copying: 382/1024 [MB] (50 MBps) [2024-11-29T09:41:06.819Z] Copying: 429/1024 [MB] (46 MBps) [2024-11-29T09:41:07.751Z] Copying: 477/1024 [MB] (47 MBps) [2024-11-29T09:41:08.683Z] Copying: 526/1024 [MB] (49 MBps) [2024-11-29T09:41:09.615Z] Copying: 573/1024 [MB] (47 MBps) [2024-11-29T09:41:10.548Z] Copying: 622/1024 [MB] (48 MBps) [2024-11-29T09:41:11.922Z] Copying: 668/1024 [MB] (45 MBps) [2024-11-29T09:41:12.855Z] Copying: 717/1024 [MB] (48 MBps) [2024-11-29T09:41:13.812Z] Copying: 764/1024 [MB] (47 MBps) [2024-11-29T09:41:14.791Z] Copying: 811/1024 [MB] (47 MBps) [2024-11-29T09:41:15.723Z] Copying: 860/1024 [MB] (48 MBps) [2024-11-29T09:41:16.657Z] Copying: 906/1024 [MB] (46 MBps) [2024-11-29T09:41:17.590Z] Copying: 954/1024 [MB] (48 MBps) [2024-11-29T09:41:18.156Z] Copying: 1000/1024 [MB] (45 MBps) [2024-11-29T09:41:18.728Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-29 09:41:18.450929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.450986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:51.002 [2024-11-29 09:41:18.451000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:51.002 [2024-11-29 09:41:18.451009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.451031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:51.002 [2024-11-29 09:41:18.451522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 
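Between the 92.877 ms startup and the shutdown that begins at 09:41:18, the test copies 1024 MiB through the FTL bdev; the counters above climb by 41-50 MB per progress tick, consistent with the reported 47 MBps average since the ticks are roughly one second apart. A quick check of the mean delta (again over a saved ftl.log):

  # Mean per-tick copy delta; ticks are ~1 s apart, so this approximates
  # the average throughput the log reports (47 MBps).
  grep -oE 'Copying: [0-9]+/1024' ftl.log |
    awk -F'[ /]' 'NR > 1 { sum += $2 - prev; n++ } { prev = $2 }
                  END { printf "mean delta: %.1f MB per tick\n", sum / n }'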
[2024-11-29 09:41:18.451543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:51.002 [2024-11-29 09:41:18.451552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:23:51.002 [2024-11-29 09:41:18.451560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.451819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.451837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:51.002 [2024-11-29 09:41:18.451847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:23:51.002 [2024-11-29 09:41:18.451855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.456573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.456618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:51.002 [2024-11-29 09:41:18.456628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:23:51.002 [2024-11-29 09:41:18.456640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.463196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.463227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:51.002 [2024-11-29 09:41:18.463238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.528 ms 00:23:51.002 [2024-11-29 09:41:18.463246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.464384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.464416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:51.002 [2024-11-29 09:41:18.464425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:23:51.002 [2024-11-29 09:41:18.464433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.467231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.467276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:51.002 [2024-11-29 09:41:18.467294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:23:51.002 [2024-11-29 09:41:18.467307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.522106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.522158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:51.002 [2024-11-29 09:41:18.522170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.682 ms 00:23:51.002 [2024-11-29 09:41:18.522178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.523780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.523811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:51.002 [2024-11-29 09:41:18.523830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:23:51.002 [2024-11-29 09:41:18.523837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.524993] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.525027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:51.002 [2024-11-29 09:41:18.525035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:23:51.002 [2024-11-29 09:41:18.525043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.525944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.525976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:51.002 [2024-11-29 09:41:18.525985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:23:51.002 [2024-11-29 09:41:18.525991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.526760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.002 [2024-11-29 09:41:18.526789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:51.002 [2024-11-29 09:41:18.526798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:23:51.002 [2024-11-29 09:41:18.526804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.002 [2024-11-29 09:41:18.526829] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:51.002 [2024-11-29 09:41:18.526842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:51.002 [2024-11-29 09:41:18.526852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:23:51.002 [2024-11-29 09:41:18.526958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.526995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:51.002 [2024-11-29 09:41:18.527076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527504] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:51.003 [2024-11-29 09:41:18.527602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:51.003 [2024-11-29 09:41:18.527610] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d7621d15-6b86-4809-9c43-c56efb3866c5 00:23:51.003 [2024-11-29 09:41:18.527627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:51.003 [2024-11-29 09:41:18.527639] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21440 00:23:51.003 [2024-11-29 09:41:18.527646] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20480 00:23:51.003 [2024-11-29 09:41:18.527654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0469 00:23:51.003 [2024-11-29 09:41:18.527661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:51.003 [2024-11-29 09:41:18.527668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:51.003 [2024-11-29 09:41:18.527676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:51.003 [2024-11-29 09:41:18.527682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:51.003 [2024-11-29 09:41:18.527691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:51.003 [2024-11-29 09:41:18.527699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.003 [2024-11-29 09:41:18.527706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:51.003 [2024-11-29 09:41:18.527721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:23:51.003 [2024-11-29 09:41:18.527728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.003 [2024-11-29 09:41:18.529062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.003 [2024-11-29 09:41:18.529085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.003 [2024-11-29 09:41:18.529094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 
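The WAF line in the statistics dump above is simply total writes divided by user writes: 21440 / 20480 = 1.046875, printed as 1.0469, i.e. under 5% write amplification for this restore workload:

  # Write amplification factor from the counters in the stats dump above.
  awk 'BEGIN { printf "WAF: %.4f\n", 21440 / 20480 }'   # WAF: 1.0469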
ms 00:23:51.003 [2024-11-29 09:41:18.529101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.003 [2024-11-29 09:41:18.529176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.003 [2024-11-29 09:41:18.529186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.003 [2024-11-29 09:41:18.529194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:51.003 [2024-11-29 09:41:18.529204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.003 [2024-11-29 09:41:18.533754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.003 [2024-11-29 09:41:18.533784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.003 [2024-11-29 09:41:18.533792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.533800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.533851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.533859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.004 [2024-11-29 09:41:18.533866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.533876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.533926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.533935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.004 [2024-11-29 09:41:18.533943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.533950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.533964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.533972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.004 [2024-11-29 09:41:18.533979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.533986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.542271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.542309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.004 [2024-11-29 09:41:18.542319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.542327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.548696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.548738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.004 [2024-11-29 09:41:18.548748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.548762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.548785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.548793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.004 [2024-11-29 
09:41:18.548801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.548808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.548853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.548862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.004 [2024-11-29 09:41:18.548870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.548877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.548941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.548960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.004 [2024-11-29 09:41:18.548972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.548980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.549010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.549022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.004 [2024-11-29 09:41:18.549030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.549037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.549071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.549079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.004 [2024-11-29 09:41:18.549087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.549094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.549131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.004 [2024-11-29 09:41:18.549140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.004 [2024-11-29 09:41:18.549147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.004 [2024-11-29 09:41:18.549154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.004 [2024-11-29 09:41:18.549262] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 98.311 ms, result 0 00:23:51.263 00:23:51.263 00:23:51.263 09:41:18 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.797 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90034 00:23:53.797 09:41:21 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90034 ']' 
00:23:53.797 09:41:21 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90034 00:23:53.797 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90034) - No such process 00:23:53.797 Process with pid 90034 is not found 00:23:53.797 09:41:21 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 90034 is not found' 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:53.797 Remove shared memory files 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:53.797 09:41:21 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:53.797 00:23:53.797 real 4m10.802s 00:23:53.797 user 3m58.114s 00:23:53.797 sys 0m12.566s 00:23:53.797 09:41:21 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.797 09:41:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:53.797 ************************************ 00:23:53.797 END TEST ftl_restore 00:23:53.797 ************************************ 00:23:53.797 09:41:21 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.797 09:41:21 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:53.797 09:41:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.797 09:41:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:53.797 ************************************ 00:23:53.797 START TEST ftl_dirty_shutdown 00:23:53.797 ************************************ 00:23:53.797 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.797 * Looking for test storage... 
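In the ftl_restore teardown just above, killprocess probed pid 90034 with kill -0 before signalling, so the already-exited process was only reported, not treated as a failure. The same probe-then-report pattern in isolation:

  # kill -0 delivers no signal; it only tests that the pid exists and is
  # signalable, which is how killprocess detects the already-gone process.
  pid=90034   # pid taken from the log above; long gone on any other system
  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
  else
    echo "Process with pid $pid is not found"
  fi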
00:23:53.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.797 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:53.797 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.798 --rc genhtml_branch_coverage=1 00:23:53.798 --rc genhtml_function_coverage=1 00:23:53.798 --rc genhtml_legend=1 00:23:53.798 --rc geninfo_all_blocks=1 00:23:53.798 --rc geninfo_unexecuted_blocks=1 00:23:53.798 00:23:53.798 ' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.798 --rc genhtml_branch_coverage=1 00:23:53.798 --rc genhtml_function_coverage=1 00:23:53.798 --rc genhtml_legend=1 00:23:53.798 --rc geninfo_all_blocks=1 00:23:53.798 --rc geninfo_unexecuted_blocks=1 00:23:53.798 00:23:53.798 ' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.798 --rc genhtml_branch_coverage=1 00:23:53.798 --rc genhtml_function_coverage=1 00:23:53.798 --rc genhtml_legend=1 00:23:53.798 --rc geninfo_all_blocks=1 00:23:53.798 --rc geninfo_unexecuted_blocks=1 00:23:53.798 00:23:53.798 ' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:53.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.798 --rc genhtml_branch_coverage=1 00:23:53.798 --rc genhtml_function_coverage=1 00:23:53.798 --rc genhtml_legend=1 00:23:53.798 --rc geninfo_all_blocks=1 00:23:53.798 --rc geninfo_unexecuted_blocks=1 00:23:53.798 00:23:53.798 ' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
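The lt/cmp_versions steps above split both version strings on the IFS set '.-:' and compare them field by field; lcov 1.15 loses to 2 in the first field, so the --rc lcov_* option spelling is selected. A condensed sketch of the idea (simplified, not the actual scripts/common.sh code):

  # Field-wise version comparison in the spirit of cmp_versions above
  # (a simplified sketch, not the actual scripts/common.sh implementation).
  version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal fields throughout: not less-than
  }
  version_lt 1.15 2 && echo 'lcov < 2: use --rc lcov_* option spelling'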
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:53.798 09:41:21 
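dirty_shutdown.sh takes its NV cache device from -c via getopts and then shifts the parsed options away, leaving the base device as the first positional argument (the device= step that follows). The pattern, reduced to its core:

  # Reduced sketch of the option handling traced above: invoked as
  #   dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
  # it ends up with nv_cache=0000:00:10.0 and device=0000:00:11.0.
  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;
    esac
  done
  shift $((OPTIND - 1))   # the trace shows this as "shift 2" for "-c <bdf>"
  device=$1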
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92710 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92710 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92710 ']' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.798 09:41:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:53.798 [2024-11-29 09:41:21.435710] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:53.798 [2024-11-29 09:41:21.435823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92710 ] 00:23:54.057 [2024-11-29 09:41:21.567629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:54.057 [2024-11-29 09:41:21.596606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.057 [2024-11-29 09:41:21.615030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:54.625 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:54.884 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:55.143 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:55.143 { 00:23:55.143 "name": "nvme0n1", 00:23:55.143 "aliases": [ 00:23:55.143 "d3ff20cf-33da-482d-acf0-e94e859a914e" 00:23:55.143 ], 00:23:55.143 "product_name": "NVMe disk", 00:23:55.143 "block_size": 4096, 00:23:55.143 "num_blocks": 1310720, 00:23:55.143 "uuid": "d3ff20cf-33da-482d-acf0-e94e859a914e", 00:23:55.143 "numa_id": -1, 00:23:55.143 "assigned_rate_limits": { 00:23:55.143 "rw_ios_per_sec": 0, 00:23:55.143 "rw_mbytes_per_sec": 0, 00:23:55.143 "r_mbytes_per_sec": 0, 00:23:55.143 "w_mbytes_per_sec": 0 00:23:55.143 }, 00:23:55.143 "claimed": true, 00:23:55.143 "claim_type": "read_many_write_one", 00:23:55.143 "zoned": false, 00:23:55.143 "supported_io_types": { 00:23:55.143 "read": true, 00:23:55.143 "write": true, 00:23:55.143 "unmap": true, 00:23:55.143 "flush": true, 00:23:55.143 "reset": true, 00:23:55.143 "nvme_admin": true, 00:23:55.143 "nvme_io": true, 00:23:55.143 "nvme_io_md": false, 00:23:55.143 "write_zeroes": true, 00:23:55.143 "zcopy": false, 00:23:55.143 "get_zone_info": false, 00:23:55.143 "zone_management": false, 00:23:55.143 "zone_append": false, 00:23:55.143 "compare": true, 00:23:55.143 "compare_and_write": false, 00:23:55.143 "abort": true, 00:23:55.143 "seek_hole": false, 00:23:55.143 "seek_data": false, 00:23:55.143 "copy": true, 00:23:55.143 "nvme_iov_md": false 00:23:55.143 }, 00:23:55.143 "driver_specific": { 00:23:55.143 "nvme": [ 00:23:55.143 { 00:23:55.143 "pci_address": "0000:00:11.0", 00:23:55.143 "trid": { 00:23:55.143 "trtype": "PCIe", 00:23:55.143 "traddr": "0000:00:11.0" 00:23:55.143 }, 00:23:55.143 "ctrlr_data": { 
00:23:55.143 "cntlid": 0, 00:23:55.143 "vendor_id": "0x1b36", 00:23:55.143 "model_number": "QEMU NVMe Ctrl", 00:23:55.143 "serial_number": "12341", 00:23:55.143 "firmware_revision": "8.0.0", 00:23:55.143 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:55.143 "oacs": { 00:23:55.143 "security": 0, 00:23:55.143 "format": 1, 00:23:55.143 "firmware": 0, 00:23:55.143 "ns_manage": 1 00:23:55.143 }, 00:23:55.143 "multi_ctrlr": false, 00:23:55.144 "ana_reporting": false 00:23:55.144 }, 00:23:55.144 "vs": { 00:23:55.144 "nvme_version": "1.4" 00:23:55.144 }, 00:23:55.144 "ns_data": { 00:23:55.144 "id": 1, 00:23:55.144 "can_share": false 00:23:55.144 } 00:23:55.144 } 00:23:55.144 ], 00:23:55.144 "mp_policy": "active_passive" 00:23:55.144 } 00:23:55.144 } 00:23:55.144 ]' 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:55.144 09:41:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:55.402 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=8b29b896-92f5-41a6-8bf9-a7a338cd7730 00:23:55.402 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:55.402 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b29b896-92f5-41a6-8bf9-a7a338cd7730 00:23:55.661 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=d9196392-dbc6-42b7-8b89-4f1a7d662a6d 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d9196392-dbc6-42b7-8b89-4f1a7d662a6d 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:55.921 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:56.179 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:56.179 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:56.179 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:56.179 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:56.179 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:56.179 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:56.179 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.179 { 00:23:56.179 "name": "0933186e-1ee8-4262-9f5d-ff28f382d45c", 00:23:56.179 "aliases": [ 00:23:56.179 "lvs/nvme0n1p0" 00:23:56.179 ], 00:23:56.179 "product_name": "Logical Volume", 00:23:56.179 "block_size": 4096, 00:23:56.179 "num_blocks": 26476544, 00:23:56.179 "uuid": "0933186e-1ee8-4262-9f5d-ff28f382d45c", 00:23:56.179 "assigned_rate_limits": { 00:23:56.179 "rw_ios_per_sec": 0, 00:23:56.179 "rw_mbytes_per_sec": 0, 00:23:56.179 "r_mbytes_per_sec": 0, 00:23:56.179 "w_mbytes_per_sec": 0 00:23:56.179 }, 00:23:56.179 "claimed": false, 00:23:56.179 "zoned": false, 00:23:56.179 "supported_io_types": { 00:23:56.179 "read": true, 00:23:56.179 "write": true, 00:23:56.179 "unmap": true, 00:23:56.179 "flush": false, 00:23:56.179 "reset": true, 00:23:56.179 "nvme_admin": false, 00:23:56.179 "nvme_io": false, 00:23:56.179 "nvme_io_md": false, 00:23:56.179 "write_zeroes": true, 00:23:56.179 "zcopy": false, 00:23:56.179 "get_zone_info": false, 00:23:56.179 "zone_management": false, 00:23:56.179 "zone_append": false, 00:23:56.179 "compare": false, 00:23:56.179 "compare_and_write": false, 00:23:56.179 "abort": false, 00:23:56.179 "seek_hole": true, 00:23:56.180 "seek_data": true, 00:23:56.180 "copy": false, 00:23:56.180 "nvme_iov_md": false 00:23:56.180 }, 00:23:56.180 "driver_specific": { 00:23:56.180 "lvol": { 00:23:56.180 "lvol_store_uuid": "d9196392-dbc6-42b7-8b89-4f1a7d662a6d", 00:23:56.180 "base_bdev": "nvme0n1", 00:23:56.180 "thin_provision": true, 00:23:56.180 "num_allocated_clusters": 0, 00:23:56.180 "snapshot": false, 00:23:56.180 "clone": false, 00:23:56.180 "esnap_clone": false 00:23:56.180 } 00:23:56.180 } 00:23:56.180 } 00:23:56.180 ]' 00:23:56.180 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.180 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.180 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.460 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.460 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.460 09:41:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.460 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:56.460 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:56.460 09:41:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:56.748 09:41:24 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:56.748 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.748 { 00:23:56.748 "name": "0933186e-1ee8-4262-9f5d-ff28f382d45c", 00:23:56.748 "aliases": [ 00:23:56.748 "lvs/nvme0n1p0" 00:23:56.748 ], 00:23:56.748 "product_name": "Logical Volume", 00:23:56.748 "block_size": 4096, 00:23:56.748 "num_blocks": 26476544, 00:23:56.748 "uuid": "0933186e-1ee8-4262-9f5d-ff28f382d45c", 00:23:56.748 "assigned_rate_limits": { 00:23:56.748 "rw_ios_per_sec": 0, 00:23:56.748 "rw_mbytes_per_sec": 0, 00:23:56.748 "r_mbytes_per_sec": 0, 00:23:56.748 "w_mbytes_per_sec": 0 00:23:56.748 }, 00:23:56.748 "claimed": false, 00:23:56.748 "zoned": false, 00:23:56.748 "supported_io_types": { 00:23:56.748 "read": true, 00:23:56.748 "write": true, 00:23:56.748 "unmap": true, 00:23:56.748 "flush": false, 00:23:56.748 "reset": true, 00:23:56.748 "nvme_admin": false, 00:23:56.748 "nvme_io": false, 00:23:56.749 "nvme_io_md": false, 00:23:56.749 "write_zeroes": true, 00:23:56.749 "zcopy": false, 00:23:56.749 "get_zone_info": false, 00:23:56.749 "zone_management": false, 00:23:56.749 "zone_append": false, 00:23:56.749 "compare": false, 00:23:56.749 "compare_and_write": false, 00:23:56.749 "abort": false, 00:23:56.749 "seek_hole": true, 00:23:56.749 "seek_data": true, 00:23:56.749 "copy": false, 00:23:56.749 "nvme_iov_md": false 00:23:56.749 }, 00:23:56.749 "driver_specific": { 00:23:56.749 "lvol": { 00:23:56.749 "lvol_store_uuid": "d9196392-dbc6-42b7-8b89-4f1a7d662a6d", 00:23:56.749 "base_bdev": "nvme0n1", 00:23:56.749 "thin_provision": true, 00:23:56.749 "num_allocated_clusters": 0, 00:23:56.749 "snapshot": false, 00:23:56.749 "clone": false, 00:23:56.749 "esnap_clone": false 00:23:56.749 } 00:23:56.749 } 00:23:56.749 } 00:23:56.749 ]' 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:56.749 09:41:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:57.008 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:57.008 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:57.008 
09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:57.008 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.008 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:57.008 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:57.008 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0933186e-1ee8-4262-9f5d-ff28f382d45c 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:57.266 { 00:23:57.266 "name": "0933186e-1ee8-4262-9f5d-ff28f382d45c", 00:23:57.266 "aliases": [ 00:23:57.266 "lvs/nvme0n1p0" 00:23:57.266 ], 00:23:57.266 "product_name": "Logical Volume", 00:23:57.266 "block_size": 4096, 00:23:57.266 "num_blocks": 26476544, 00:23:57.266 "uuid": "0933186e-1ee8-4262-9f5d-ff28f382d45c", 00:23:57.266 "assigned_rate_limits": { 00:23:57.266 "rw_ios_per_sec": 0, 00:23:57.266 "rw_mbytes_per_sec": 0, 00:23:57.266 "r_mbytes_per_sec": 0, 00:23:57.266 "w_mbytes_per_sec": 0 00:23:57.266 }, 00:23:57.266 "claimed": false, 00:23:57.266 "zoned": false, 00:23:57.266 "supported_io_types": { 00:23:57.266 "read": true, 00:23:57.266 "write": true, 00:23:57.266 "unmap": true, 00:23:57.266 "flush": false, 00:23:57.266 "reset": true, 00:23:57.266 "nvme_admin": false, 00:23:57.266 "nvme_io": false, 00:23:57.266 "nvme_io_md": false, 00:23:57.266 "write_zeroes": true, 00:23:57.266 "zcopy": false, 00:23:57.266 "get_zone_info": false, 00:23:57.266 "zone_management": false, 00:23:57.266 "zone_append": false, 00:23:57.266 "compare": false, 00:23:57.266 "compare_and_write": false, 00:23:57.266 "abort": false, 00:23:57.266 "seek_hole": true, 00:23:57.266 "seek_data": true, 00:23:57.266 "copy": false, 00:23:57.266 "nvme_iov_md": false 00:23:57.266 }, 00:23:57.266 "driver_specific": { 00:23:57.266 "lvol": { 00:23:57.266 "lvol_store_uuid": "d9196392-dbc6-42b7-8b89-4f1a7d662a6d", 00:23:57.266 "base_bdev": "nvme0n1", 00:23:57.266 "thin_provision": true, 00:23:57.266 "num_allocated_clusters": 0, 00:23:57.266 "snapshot": false, 00:23:57.266 "clone": false, 00:23:57.266 "esnap_clone": false 00:23:57.266 } 00:23:57.266 } 00:23:57.266 } 00:23:57.266 ]' 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0933186e-1ee8-4262-9f5d-ff28f382d45c --l2p_dram_limit 10' 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
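The get_bdev_size helper traced above is plain arithmetic over the bdev's JSON description: it multiplies num_blocks by block_size and divides by 1 MiB, which is how bs=4096 with nb=1310720 became bdev_size=5120 for nvme0n1, and bs=4096 with nb=26476544 became bdev_size=103424 for the lvol. A minimal standalone sketch of the same computation, assuming the rpc.py path from the trace and an installed jq ($rpc and $bdev are shorthand introduced for this note, not names from the test scripts):

# Mirror get_bdev_size: size in MiB = num_blocks * block_size / 1 MiB
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev=nvme0n1
bs=$($rpc bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')
nb=$($rpc bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')
echo $(( bs * nb / 1048576 ))   # 4096 * 1310720 / 1048576 = 5120 (MiB)

With the nvc0n1 cache split in place, the accumulated ftl_construct_args expand to the bdev_ftl_create invocation traced next.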
00:23:57.266 09:41:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0933186e-1ee8-4262-9f5d-ff28f382d45c --l2p_dram_limit 10 -c nvc0n1p0 00:23:57.540 [2024-11-29 09:41:25.091628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.091677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:57.540 [2024-11-29 09:41:25.091690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:57.540 [2024-11-29 09:41:25.091697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.091745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.091756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:57.540 [2024-11-29 09:41:25.091766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:57.540 [2024-11-29 09:41:25.091772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.091789] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:57.540 [2024-11-29 09:41:25.092258] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:57.540 [2024-11-29 09:41:25.092294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.092302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:57.540 [2024-11-29 09:41:25.092311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:23:57.540 [2024-11-29 09:41:25.092317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.092425] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9bdcd7e5-faf1-4d3d-9a3e-4f068cf2336b 00:23:57.540 [2024-11-29 09:41:25.093361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.093388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:57.540 [2024-11-29 09:41:25.093397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:57.540 [2024-11-29 09:41:25.093404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.097946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.097975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:57.540 [2024-11-29 09:41:25.097983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.507 ms 00:23:57.540 [2024-11-29 09:41:25.097997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.098068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.098077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:57.540 [2024-11-29 09:41:25.098084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:57.540 [2024-11-29 09:41:25.098091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.098135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.098146] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:57.540 [2024-11-29 09:41:25.098152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:57.540 [2024-11-29 09:41:25.098159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.098176] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:57.540 [2024-11-29 09:41:25.099407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.099434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:57.540 [2024-11-29 09:41:25.099444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:23:57.540 [2024-11-29 09:41:25.099450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.099478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.099485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:57.540 [2024-11-29 09:41:25.099495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:57.540 [2024-11-29 09:41:25.099502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.099523] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:57.540 [2024-11-29 09:41:25.099734] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:57.540 [2024-11-29 09:41:25.099751] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:57.540 [2024-11-29 09:41:25.099761] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:57.540 [2024-11-29 09:41:25.099777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:57.540 [2024-11-29 09:41:25.099785] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:57.540 [2024-11-29 09:41:25.099796] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:57.540 [2024-11-29 09:41:25.099804] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:57.540 [2024-11-29 09:41:25.099813] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:57.540 [2024-11-29 09:41:25.099819] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:57.540 [2024-11-29 09:41:25.099826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.099832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:57.540 [2024-11-29 09:41:25.099839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:23:57.540 [2024-11-29 09:41:25.099844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.540 [2024-11-29 09:41:25.099909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.540 [2024-11-29 09:41:25.099915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:57.540 [2024-11-29 09:41:25.099924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:57.541 [2024-11-29 09:41:25.099929] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.541 [2024-11-29 09:41:25.100003] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:57.541 [2024-11-29 09:41:25.100010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:57.541 [2024-11-29 09:41:25.100017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:57.541 [2024-11-29 09:41:25.100035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:57.541 [2024-11-29 09:41:25.100053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.541 [2024-11-29 09:41:25.100066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:57.541 [2024-11-29 09:41:25.100072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:57.541 [2024-11-29 09:41:25.100080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.541 [2024-11-29 09:41:25.100085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:57.541 [2024-11-29 09:41:25.100091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:57.541 [2024-11-29 09:41:25.100097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:57.541 [2024-11-29 09:41:25.100109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:57.541 [2024-11-29 09:41:25.100126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:57.541 [2024-11-29 09:41:25.100142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:57.541 [2024-11-29 09:41:25.100159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:57.541 [2024-11-29 09:41:25.100176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.541 [2024-11-29 
09:41:25.100187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:57.541 [2024-11-29 09:41:25.100192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.541 [2024-11-29 09:41:25.100203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:57.541 [2024-11-29 09:41:25.100208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:57.541 [2024-11-29 09:41:25.100215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.541 [2024-11-29 09:41:25.100220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:57.541 [2024-11-29 09:41:25.100225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:57.541 [2024-11-29 09:41:25.100230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:57.541 [2024-11-29 09:41:25.100240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:57.541 [2024-11-29 09:41:25.100246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100251] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:57.541 [2024-11-29 09:41:25.100259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:57.541 [2024-11-29 09:41:25.100264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.541 [2024-11-29 09:41:25.100278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:57.541 [2024-11-29 09:41:25.100285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:57.541 [2024-11-29 09:41:25.100290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:57.541 [2024-11-29 09:41:25.100296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:57.541 [2024-11-29 09:41:25.100301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:57.541 [2024-11-29 09:41:25.100307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:57.541 [2024-11-29 09:41:25.100314] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:57.541 [2024-11-29 09:41:25.100322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.541 [2024-11-29 09:41:25.100328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:57.541 [2024-11-29 09:41:25.100335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:57.541 [2024-11-29 09:41:25.100340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:57.541 [2024-11-29 09:41:25.100346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:57.541 [2024-11-29 09:41:25.100352] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:57.541 [2024-11-29 09:41:25.100360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:57.541 [2024-11-29 09:41:25.100366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:57.541 [2024-11-29 09:41:25.100372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:57.541 [2024-11-29 09:41:25.100377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:57.541 [2024-11-29 09:41:25.100384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:57.541 [2024-11-29 09:41:25.100389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:57.541 [2024-11-29 09:41:25.100395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:57.541 [2024-11-29 09:41:25.100400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:57.541 [2024-11-29 09:41:25.100406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:57.541 [2024-11-29 09:41:25.100411] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:57.541 [2024-11-29 09:41:25.100419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.541 [2024-11-29 09:41:25.100425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:57.541 [2024-11-29 09:41:25.100431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:57.541 [2024-11-29 09:41:25.100437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:57.541 [2024-11-29 09:41:25.100443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:57.541 [2024-11-29 09:41:25.100449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.541 [2024-11-29 09:41:25.100457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:57.541 [2024-11-29 09:41:25.100463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:23:57.541 [2024-11-29 09:41:25.100472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.541 [2024-11-29 09:41:25.100499] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
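Two lines of arithmetic are enough to sanity-check the layout dump above: the l2p region size is simply the entry count times the 4-byte address size, and the base capacity is the lvol's block count times its 4096-byte block size. The --l2p_dram_limit 10 passed to bdev_ftl_create caps how much of that 80 MiB table stays resident in DRAM, which the l2p_cache message further down reports as 9 (of 10) MiB.

# Cross-checks against the ftl_layout dump (values copied from the NOTICE lines above)
echo $(( 20971520 * 4 / 1048576 ))      # 80     -> "Region l2p ... blocks: 80.00 MiB"
echo $(( 26476544 * 4096 / 1048576 ))   # 103424 -> "Base device capacity: 103424.00 MiB"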
00:23:57.541 [2024-11-29 09:41:25.100508] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:00.070 [2024-11-29 09:41:27.184930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.184986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:00.070 [2024-11-29 09:41:27.185001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2084.421 ms 00:24:00.070 [2024-11-29 09:41:27.185011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.193114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.193159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.070 [2024-11-29 09:41:27.193178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.998 ms 00:24:00.070 [2024-11-29 09:41:27.193189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.193285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.193296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:00.070 [2024-11-29 09:41:27.193305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:00.070 [2024-11-29 09:41:27.193314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.201551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.201611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.070 [2024-11-29 09:41:27.201623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.185 ms 00:24:00.070 [2024-11-29 09:41:27.201632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.201678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.201688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.070 [2024-11-29 09:41:27.201697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:00.070 [2024-11-29 09:41:27.201710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.202028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.202056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.070 [2024-11-29 09:41:27.202070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:24:00.070 [2024-11-29 09:41:27.202085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.202195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.202210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.070 [2024-11-29 09:41:27.202220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:00.070 [2024-11-29 09:41:27.202230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.207396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.207436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.070 [2024-11-29 
09:41:27.207445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.146 ms 00:24:00.070 [2024-11-29 09:41:27.207456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.240512] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:00.070 [2024-11-29 09:41:27.243133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.243165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:00.070 [2024-11-29 09:41:27.243184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.602 ms 00:24:00.070 [2024-11-29 09:41:27.243192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.288360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.288410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:00.070 [2024-11-29 09:41:27.288427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.122 ms 00:24:00.070 [2024-11-29 09:41:27.288440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.288630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.288641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:00.070 [2024-11-29 09:41:27.288651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:24:00.070 [2024-11-29 09:41:27.288660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.291502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.291540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:00.070 [2024-11-29 09:41:27.291551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:24:00.070 [2024-11-29 09:41:27.291560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.293846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.293877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:00.070 [2024-11-29 09:41:27.293888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:24:00.070 [2024-11-29 09:41:27.293896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.070 [2024-11-29 09:41:27.294193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.070 [2024-11-29 09:41:27.294212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:00.070 [2024-11-29 09:41:27.294230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:24:00.071 [2024-11-29 09:41:27.294237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.071 [2024-11-29 09:41:27.319535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.071 [2024-11-29 09:41:27.319578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:00.071 [2024-11-29 09:41:27.319603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.273 ms 00:24:00.071 [2024-11-29 09:41:27.319612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.071 [2024-11-29 09:41:27.323120] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.071 [2024-11-29 09:41:27.323154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:00.071 [2024-11-29 09:41:27.323166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:24:00.071 [2024-11-29 09:41:27.323174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.071 [2024-11-29 09:41:27.326148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.071 [2024-11-29 09:41:27.326180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:00.071 [2024-11-29 09:41:27.326190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:24:00.071 [2024-11-29 09:41:27.326197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.071 [2024-11-29 09:41:27.329434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.071 [2024-11-29 09:41:27.329468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:00.071 [2024-11-29 09:41:27.329482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:24:00.071 [2024-11-29 09:41:27.329489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.071 [2024-11-29 09:41:27.329528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.071 [2024-11-29 09:41:27.329537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:00.071 [2024-11-29 09:41:27.329549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:00.071 [2024-11-29 09:41:27.329556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.071 [2024-11-29 09:41:27.329664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.071 [2024-11-29 09:41:27.329681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:00.071 [2024-11-29 09:41:27.329691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:00.071 [2024-11-29 09:41:27.329701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.071 [2024-11-29 09:41:27.330574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2238.573 ms, result 0 00:24:00.071 { 00:24:00.071 "name": "ftl0", 00:24:00.071 "uuid": "9bdcd7e5-faf1-4d3d-9a3e-4f068cf2336b" 00:24:00.071 } 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:00.071 /dev/nbd0 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:00.071 1+0 records in 00:24:00.071 1+0 records out 00:24:00.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224602 s, 18.2 MB/s 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:00.071 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:00.329 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:00.329 09:41:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:00.329 09:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:00.329 [2024-11-29 09:41:27.853564] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:00.329 [2024-11-29 09:41:27.853698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92838 ] 00:24:00.329 [2024-11-29 09:41:27.984267] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:00.329 [2024-11-29 09:41:28.011722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.329 [2024-11-29 09:41:28.030063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.711  [2024-11-29T09:41:30.373Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-29T09:41:31.309Z] Copying: 405/1024 [MB] (209 MBps) [2024-11-29T09:41:32.243Z] Copying: 669/1024 [MB] (263 MBps) [2024-11-29T09:41:32.501Z] Copying: 925/1024 [MB] (255 MBps) [2024-11-29T09:41:32.759Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:24:05.033 00:24:05.033 09:41:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:06.933 09:41:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:07.191 [2024-11-29 09:41:34.666737] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
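The spdk_dd runs at steps @75 through @77 are the heart of the dirty-shutdown exercise: fill a 1 GiB file (262144 blocks of 4096 bytes) from /dev/urandom, record its md5sum as the reference checksum for the rest of the test, then replay the file onto ftl0 through the /dev/nbd0 mapping set up at @71. Condensed, with paths as in the trace ($dd and $testfile are shorthand introduced for this note):

# Step @75: 1 GiB of random data into the testfile (262144 * 4096 B = 1 GiB)
dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
$dd -m 0x2 --if=/dev/urandom --of=$testfile --bs=4096 --count=262144
# Step @76: reference checksum of the source data
md5sum $testfile
# Step @77: write the same data through the FTL device via its nbd mapping
$dd -m 0x2 --if=$testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct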
00:24:07.191 [2024-11-29 09:41:34.666855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92914 ] 00:24:07.191 [2024-11-29 09:41:34.797427] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:07.191 [2024-11-29 09:41:34.823511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.191 [2024-11-29 09:41:34.840225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.194  [2024-11-29T09:41:37.291Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-29T09:41:38.224Z] Copying: 68/1024 [MB] (35 MBps) [2024-11-29T09:41:39.158Z] Copying: 98/1024 [MB] (30 MBps) [2024-11-29T09:41:40.091Z] Copying: 127/1024 [MB] (28 MBps) [2024-11-29T09:41:41.024Z] Copying: 157/1024 [MB] (30 MBps) [2024-11-29T09:41:41.963Z] Copying: 188/1024 [MB] (30 MBps) [2024-11-29T09:41:42.952Z] Copying: 218/1024 [MB] (30 MBps) [2024-11-29T09:41:44.326Z] Copying: 244/1024 [MB] (26 MBps) [2024-11-29T09:41:44.890Z] Copying: 274/1024 [MB] (29 MBps) [2024-11-29T09:41:46.260Z] Copying: 305/1024 [MB] (30 MBps) [2024-11-29T09:41:47.194Z] Copying: 334/1024 [MB] (29 MBps) [2024-11-29T09:41:48.128Z] Copying: 364/1024 [MB] (29 MBps) [2024-11-29T09:41:49.061Z] Copying: 394/1024 [MB] (29 MBps) [2024-11-29T09:41:49.993Z] Copying: 424/1024 [MB] (30 MBps) [2024-11-29T09:41:50.926Z] Copying: 454/1024 [MB] (29 MBps) [2024-11-29T09:41:52.315Z] Copying: 488/1024 [MB] (33 MBps) [2024-11-29T09:41:53.248Z] Copying: 519/1024 [MB] (31 MBps) [2024-11-29T09:41:54.181Z] Copying: 549/1024 [MB] (30 MBps) [2024-11-29T09:41:55.124Z] Copying: 580/1024 [MB] (30 MBps) [2024-11-29T09:41:56.065Z] Copying: 609/1024 [MB] (29 MBps) [2024-11-29T09:41:56.997Z] Copying: 640/1024 [MB] (31 MBps) [2024-11-29T09:41:57.979Z] Copying: 671/1024 [MB] (30 MBps) [2024-11-29T09:41:58.910Z] Copying: 702/1024 [MB] (30 MBps) [2024-11-29T09:42:00.282Z] Copying: 732/1024 [MB] (30 MBps) [2024-11-29T09:42:01.216Z] Copying: 763/1024 [MB] (30 MBps) [2024-11-29T09:42:02.149Z] Copying: 793/1024 [MB] (30 MBps) [2024-11-29T09:42:03.084Z] Copying: 824/1024 [MB] (30 MBps) [2024-11-29T09:42:04.017Z] Copying: 858/1024 [MB] (34 MBps) [2024-11-29T09:42:04.950Z] Copying: 890/1024 [MB] (32 MBps) [2024-11-29T09:42:06.321Z] Copying: 922/1024 [MB] (31 MBps) [2024-11-29T09:42:06.888Z] Copying: 953/1024 [MB] (30 MBps) [2024-11-29T09:42:08.262Z] Copying: 984/1024 [MB] (31 MBps) [2024-11-29T09:42:08.262Z] Copying: 1019/1024 [MB] (34 MBps) [2024-11-29T09:42:08.262Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:24:40.536 00:24:40.536 09:42:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:40.536 09:42:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:40.830 09:42:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:40.830 [2024-11-29 09:42:08.544155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.830 [2024-11-29 09:42:08.544197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:40.830 [2024-11-29 09:42:08.544208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:40.830 [2024-11-29 
09:42:08.544216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.830 [2024-11-29 09:42:08.544237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.830 [2024-11-29 09:42:08.544670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.830 [2024-11-29 09:42:08.544685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:40.830 [2024-11-29 09:42:08.544699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:24:40.830 [2024-11-29 09:42:08.544705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.830 [2024-11-29 09:42:08.546348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.830 [2024-11-29 09:42:08.546469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:40.830 [2024-11-29 09:42:08.546484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:24:40.830 [2024-11-29 09:42:08.546491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.559417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.559444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:41.133 [2024-11-29 09:42:08.559455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.905 ms 00:24:41.133 [2024-11-29 09:42:08.559461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.564459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.564483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:41.133 [2024-11-29 09:42:08.564493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.966 ms 00:24:41.133 [2024-11-29 09:42:08.564499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.565490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.565609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:41.133 [2024-11-29 09:42:08.565630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:24:41.133 [2024-11-29 09:42:08.565636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.569355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.569458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:41.133 [2024-11-29 09:42:08.569474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:24:41.133 [2024-11-29 09:42:08.569480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.569576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.569613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:41.133 [2024-11-29 09:42:08.569634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:41.133 [2024-11-29 09:42:08.569642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.571179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.571206] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:41.133 [2024-11-29 09:42:08.571214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:24:41.133 [2024-11-29 09:42:08.571220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.572327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.572353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:41.133 [2024-11-29 09:42:08.572361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:24:41.133 [2024-11-29 09:42:08.572367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.573297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.573376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:41.133 [2024-11-29 09:42:08.573421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:24:41.133 [2024-11-29 09:42:08.573463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.574266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.133 [2024-11-29 09:42:08.574346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:41.133 [2024-11-29 09:42:08.574388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:24:41.133 [2024-11-29 09:42:08.574396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.133 [2024-11-29 09:42:08.574419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:41.133 [2024-11-29 09:42:08.574430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:41.133 [2024-11-29 09:42:08.574507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574512] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574690] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 
09:42:08.574855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.574995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:24:41.134 [2024-11-29 09:42:08.575020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:41.134 [2024-11-29 09:42:08.575114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:41.134 [2024-11-29 09:42:08.575121] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bdcd7e5-faf1-4d3d-9a3e-4f068cf2336b 00:24:41.135 [2024-11-29 09:42:08.575127] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:41.135 [2024-11-29 09:42:08.575134] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:41.135 [2024-11-29 09:42:08.575139] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:41.135 [2024-11-29 09:42:08.575146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:41.135 [2024-11-29 09:42:08.575152] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:41.135 [2024-11-29 09:42:08.575159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:41.135 [2024-11-29 09:42:08.575165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:41.135 [2024-11-29 09:42:08.575171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:41.135 [2024-11-29 09:42:08.575176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:41.135 [2024-11-29 09:42:08.575183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.135 [2024-11-29 09:42:08.575190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:41.135 [2024-11-29 09:42:08.575197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:24:41.135 [2024-11-29 09:42:08.575203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.576470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.135 [2024-11-29 09:42:08.576554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:41.135 [2024-11-29 09:42:08.576567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:24:41.135 [2024-11-29 09:42:08.576574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.576670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.135 [2024-11-29 09:42:08.576678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:41.135 [2024-11-29 09:42:08.576686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:41.135 [2024-11-29 09:42:08.576692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.581295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.581381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:41.135 [2024-11-29 09:42:08.581424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.581442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.581529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.581557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:41.135 [2024-11-29 09:42:08.581731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.581751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.581814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.581839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:41.135 [2024-11-29 09:42:08.581857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.581897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.581927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.581979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:41.135 [2024-11-29 09:42:08.582000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.582034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.590157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.590256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:41.135 [2024-11-29 09:42:08.590299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.590317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.596887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.596997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:41.135 [2024-11-29 09:42:08.597041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 
[2024-11-29 09:42:08.597059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.597128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.597219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:41.135 [2024-11-29 09:42:08.597240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.597254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.597295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.597356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:41.135 [2024-11-29 09:42:08.597379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.597393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.597462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.597482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:41.135 [2024-11-29 09:42:08.597498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.597512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.597579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.597623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:41.135 [2024-11-29 09:42:08.597705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.597723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.597765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.597787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:41.135 [2024-11-29 09:42:08.597803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.597818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.597931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.135 [2024-11-29 09:42:08.597952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:41.135 [2024-11-29 09:42:08.597970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.135 [2024-11-29 09:42:08.598015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.135 [2024-11-29 09:42:08.598138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.953 ms, result 0 00:24:41.135 true 00:24:41.135 09:42:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92710 00:24:41.135 09:42:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92710 00:24:41.135 09:42:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:41.135 [2024-11-29 09:42:08.693788] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 
initialization... 00:24:41.135 [2024-11-29 09:42:08.694023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93270 ] 00:24:41.477 [2024-11-29 09:42:08.827014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:41.477 [2024-11-29 09:42:08.856782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.477 [2024-11-29 09:42:08.880274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.418  [2024-11-29T09:42:11.083Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-29T09:42:12.017Z] Copying: 378/1024 [MB] (189 MBps) [2024-11-29T09:42:12.952Z] Copying: 633/1024 [MB] (254 MBps) [2024-11-29T09:42:13.519Z] Copying: 891/1024 [MB] (257 MBps) [2024-11-29T09:42:13.779Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:24:46.053 00:24:46.053 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92710 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:46.053 09:42:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:46.053 [2024-11-29 09:42:13.660911] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:46.053 [2024-11-29 09:42:13.661034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93327 ] 00:24:46.311 [2024-11-29 09:42:13.793521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
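The trace above captures the heart of the dirty-shutdown scenario: dirty_shutdown.sh@83 SIGKILLs the running spdk_tgt (pid 92710) while ftl0 is live, @84 removes its trace shm file, and @87-@88 use spdk_dd first to stage 1 GiB of random data (262144 blocks of 4096 bytes) in testfile2 and then to replay it into ftl0 from the saved ftl.json configuration, which forces the recovery path seen below ("Performing recovery on blobstore", "Set FTL dirty state"). A minimal sketch of that staging-and-replay step, with the bdev name and flag values copied from this log, and SPDK_DIR as an assumed stand-in for this run's /home/vagrant/spdk_repo/spdk checkout:

    #!/usr/bin/env bash
    # Sketch of the spdk_dd staging/replay pair run above. SPDK_DIR is an
    # assumption; adjust it to your checkout. Flags mirror this log exactly.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    DD="$SPDK_DIR/build/bin/spdk_dd"

    # Stage 1 GiB of random data: 262144 blocks x 4096 bytes.
    "$DD" --if=/dev/urandom --of="$SPDK_DIR/test/ftl/testfile2" \
          --bs=4096 --count=262144

    # Replay it into the ftl0 output bdev described by the saved FTL
    # config, seeking 262144 blocks into the device; opening ftl0 from
    # ftl.json after the kill -9 is what exercises dirty recovery.
    "$DD" --if="$SPDK_DIR/test/ftl/testfile2" --ob=ftl0 \
          --count=262144 --seek=262144 \
          --json="$SPDK_DIR/test/ftl/config/ftl.json"

The --json file acts as the bdev configuration here: spdk_dd starts its own SPDK app instance, loads ftl0 from ftl.json, and performs the copy in-process, which is why a fresh EAL/reactor startup and a full 'FTL startup' management sequence appear in the log before any "Copying:" progress lines.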
00:24:46.311 [2024-11-29 09:42:13.819161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.311 [2024-11-29 09:42:13.837615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.311 [2024-11-29 09:42:13.920098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.311 [2024-11-29 09:42:13.920159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.311 [2024-11-29 09:42:13.981862] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:46.311 [2024-11-29 09:42:13.982158] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:46.311 [2024-11-29 09:42:13.982383] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:46.570 [2024-11-29 09:42:14.152521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.152556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:46.570 [2024-11-29 09:42:14.152566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.570 [2024-11-29 09:42:14.152571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.152623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.152631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.570 [2024-11-29 09:42:14.152640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:46.570 [2024-11-29 09:42:14.152645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.152660] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:46.570 [2024-11-29 09:42:14.152833] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:46.570 [2024-11-29 09:42:14.152844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.152854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.570 [2024-11-29 09:42:14.152860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:24:46.570 [2024-11-29 09:42:14.152865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.153911] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:46.570 [2024-11-29 09:42:14.155907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.155936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:46.570 [2024-11-29 09:42:14.155944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:24:46.570 [2024-11-29 09:42:14.155949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.155997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.156005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:46.570 [2024-11-29 09:42:14.156013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:46.570 [2024-11-29 09:42:14.156019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.160318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.160342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.570 [2024-11-29 09:42:14.160349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.270 ms 00:24:46.570 [2024-11-29 09:42:14.160355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.160418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.160425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.570 [2024-11-29 09:42:14.160433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:46.570 [2024-11-29 09:42:14.160438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.160482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.160489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:46.570 [2024-11-29 09:42:14.160496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:46.570 [2024-11-29 09:42:14.160501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.160518] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:46.570 [2024-11-29 09:42:14.161692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.161713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.570 [2024-11-29 09:42:14.161727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:24:46.570 [2024-11-29 09:42:14.161733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.161754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.161763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:46.570 [2024-11-29 09:42:14.161769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:46.570 [2024-11-29 09:42:14.161778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.161792] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:46.570 [2024-11-29 09:42:14.161808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:46.570 [2024-11-29 09:42:14.161837] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:46.570 [2024-11-29 09:42:14.161850] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:46.570 [2024-11-29 09:42:14.161930] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:46.570 [2024-11-29 09:42:14.161937] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:46.570 [2024-11-29 09:42:14.161945] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:46.570 [2024-11-29 09:42:14.161952] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:46.570 [2024-11-29 09:42:14.161959] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:46.570 [2024-11-29 09:42:14.161965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:46.570 [2024-11-29 09:42:14.161970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:46.570 [2024-11-29 09:42:14.161976] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:46.570 [2024-11-29 09:42:14.161986] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:46.570 [2024-11-29 09:42:14.161991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.161997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:46.570 [2024-11-29 09:42:14.162003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:24:46.570 [2024-11-29 09:42:14.162011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.162075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.570 [2024-11-29 09:42:14.162081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:46.570 [2024-11-29 09:42:14.162087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:46.570 [2024-11-29 09:42:14.162092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.570 [2024-11-29 09:42:14.162165] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:46.570 [2024-11-29 09:42:14.162172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:46.570 [2024-11-29 09:42:14.162181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.570 [2024-11-29 09:42:14.162187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.570 [2024-11-29 09:42:14.162193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:46.571 [2024-11-29 09:42:14.162198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:46.571 [2024-11-29 09:42:14.162214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.571 [2024-11-29 09:42:14.162225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:46.571 [2024-11-29 09:42:14.162230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:46.571 [2024-11-29 09:42:14.162240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.571 [2024-11-29 09:42:14.162245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:46.571 [2024-11-29 09:42:14.162250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:46.571 [2024-11-29 09:42:14.162255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:46.571 [2024-11-29 09:42:14.162265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162270] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:46.571 [2024-11-29 09:42:14.162279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:46.571 [2024-11-29 09:42:14.162294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:46.571 [2024-11-29 09:42:14.162308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:46.571 [2024-11-29 09:42:14.162326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:46.571 [2024-11-29 09:42:14.162341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.571 [2024-11-29 09:42:14.162352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:46.571 [2024-11-29 09:42:14.162357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:46.571 [2024-11-29 09:42:14.162362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.571 [2024-11-29 09:42:14.162368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:46.571 [2024-11-29 09:42:14.162374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:46.571 [2024-11-29 09:42:14.162379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:46.571 [2024-11-29 09:42:14.162390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:46.571 [2024-11-29 09:42:14.162396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:46.571 [2024-11-29 09:42:14.162411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:46.571 [2024-11-29 09:42:14.162417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.571 [2024-11-29 09:42:14.162433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:46.571 [2024-11-29 09:42:14.162439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:46.571 [2024-11-29 09:42:14.162444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:46.571 
[2024-11-29 09:42:14.162450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:46.571 [2024-11-29 09:42:14.162455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:46.571 [2024-11-29 09:42:14.162461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:46.571 [2024-11-29 09:42:14.162468] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:46.571 [2024-11-29 09:42:14.162475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.571 [2024-11-29 09:42:14.162484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:46.571 [2024-11-29 09:42:14.162490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:46.571 [2024-11-29 09:42:14.162496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:46.571 [2024-11-29 09:42:14.162502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:46.571 [2024-11-29 09:42:14.162508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:46.571 [2024-11-29 09:42:14.162515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:46.571 [2024-11-29 09:42:14.162521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:46.571 [2024-11-29 09:42:14.162528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:46.571 [2024-11-29 09:42:14.162534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:46.571 [2024-11-29 09:42:14.162540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:46.571 [2024-11-29 09:42:14.162546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:46.571 [2024-11-29 09:42:14.162552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:46.571 [2024-11-29 09:42:14.162557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:46.571 [2024-11-29 09:42:14.162564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:46.571 [2024-11-29 09:42:14.162569] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:46.571 [2024-11-29 09:42:14.162576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.571 [2024-11-29 09:42:14.162598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:46.571 [2024-11-29 09:42:14.162605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:46.571 [2024-11-29 09:42:14.162611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:46.571 [2024-11-29 09:42:14.162618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:46.571 [2024-11-29 09:42:14.162624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.162633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:46.571 [2024-11-29 09:42:14.162640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:24:46.571 [2024-11-29 09:42:14.162646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.571 [2024-11-29 09:42:14.170457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.170482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.571 [2024-11-29 09:42:14.170493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.776 ms 00:24:46.571 [2024-11-29 09:42:14.170502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.571 [2024-11-29 09:42:14.170562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.170571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.571 [2024-11-29 09:42:14.170577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:46.571 [2024-11-29 09:42:14.170592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.571 [2024-11-29 09:42:14.199092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.199287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.571 [2024-11-29 09:42:14.199310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.456 ms 00:24:46.571 [2024-11-29 09:42:14.199327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.571 [2024-11-29 09:42:14.199381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.199393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.571 [2024-11-29 09:42:14.199404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:46.571 [2024-11-29 09:42:14.199414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.571 [2024-11-29 09:42:14.199850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.199876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.571 [2024-11-29 09:42:14.199889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:24:46.571 [2024-11-29 09:42:14.199900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.571 [2024-11-29 09:42:14.200057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.200076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.571 [2024-11-29 09:42:14.200093] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:46.571 [2024-11-29 09:42:14.200105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.571 [2024-11-29 09:42:14.206204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.571 [2024-11-29 09:42:14.206336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.572 [2024-11-29 09:42:14.206353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.074 ms 00:24:46.572 [2024-11-29 09:42:14.206364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.208783] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:46.572 [2024-11-29 09:42:14.208805] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:46.572 [2024-11-29 09:42:14.208822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.208832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:46.572 [2024-11-29 09:42:14.208843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:24:46.572 [2024-11-29 09:42:14.208852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.221751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.221786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:46.572 [2024-11-29 09:42:14.221795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.858 ms 00:24:46.572 [2024-11-29 09:42:14.221800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.223332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.223359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:46.572 [2024-11-29 09:42:14.223366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:24:46.572 [2024-11-29 09:42:14.223372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.224625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.224649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:46.572 [2024-11-29 09:42:14.224656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:24:46.572 [2024-11-29 09:42:14.224661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.224931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.224948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.572 [2024-11-29 09:42:14.224955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:24:46.572 [2024-11-29 09:42:14.224961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.238554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.238603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:46.572 [2024-11-29 09:42:14.238612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.581 ms 00:24:46.572 [2024-11-29 09:42:14.238619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.244331] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:46.572 [2024-11-29 09:42:14.246395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.246422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.572 [2024-11-29 09:42:14.246431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.737 ms 00:24:46.572 [2024-11-29 09:42:14.246438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.246482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.246490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:46.572 [2024-11-29 09:42:14.246499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:46.572 [2024-11-29 09:42:14.246505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.246576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.246595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.572 [2024-11-29 09:42:14.246603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:46.572 [2024-11-29 09:42:14.246609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.246625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.246632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.572 [2024-11-29 09:42:14.246639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:46.572 [2024-11-29 09:42:14.246647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.246672] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:46.572 [2024-11-29 09:42:14.246681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.246689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:46.572 [2024-11-29 09:42:14.246695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:46.572 [2024-11-29 09:42:14.246701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.249730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.249756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.572 [2024-11-29 09:42:14.249765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:24:46.572 [2024-11-29 09:42:14.249775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 [2024-11-29 09:42:14.249827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.572 [2024-11-29 09:42:14.249834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.572 [2024-11-29 09:42:14.249840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:46.572 [2024-11-29 09:42:14.249845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.572 
[2024-11-29 09:42:14.250614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.768 ms, result 0 00:24:47.959  [2024-11-29T09:42:16.624Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-29T09:42:17.562Z] Copying: 50/1024 [MB] (18 MBps) [2024-11-29T09:42:18.500Z] Copying: 67/1024 [MB] (17 MBps) [2024-11-29T09:42:19.444Z] Copying: 86/1024 [MB] (18 MBps) [2024-11-29T09:42:20.387Z] Copying: 102/1024 [MB] (16 MBps) [2024-11-29T09:42:21.331Z] Copying: 126/1024 [MB] (23 MBps) [2024-11-29T09:42:22.276Z] Copying: 143/1024 [MB] (16 MBps) [2024-11-29T09:42:23.663Z] Copying: 162/1024 [MB] (19 MBps) [2024-11-29T09:42:24.604Z] Copying: 179/1024 [MB] (17 MBps) [2024-11-29T09:42:25.545Z] Copying: 200/1024 [MB] (21 MBps) [2024-11-29T09:42:26.489Z] Copying: 220/1024 [MB] (19 MBps) [2024-11-29T09:42:27.432Z] Copying: 241/1024 [MB] (20 MBps) [2024-11-29T09:42:28.374Z] Copying: 263/1024 [MB] (22 MBps) [2024-11-29T09:42:29.342Z] Copying: 278/1024 [MB] (15 MBps) [2024-11-29T09:42:30.310Z] Copying: 298/1024 [MB] (20 MBps) [2024-11-29T09:42:31.695Z] Copying: 316/1024 [MB] (17 MBps) [2024-11-29T09:42:32.267Z] Copying: 338/1024 [MB] (22 MBps) [2024-11-29T09:42:33.657Z] Copying: 358/1024 [MB] (20 MBps) [2024-11-29T09:42:34.603Z] Copying: 375/1024 [MB] (16 MBps) [2024-11-29T09:42:35.546Z] Copying: 385/1024 [MB] (10 MBps) [2024-11-29T09:42:36.490Z] Copying: 405280/1048576 [kB] (10096 kBps) [2024-11-29T09:42:37.435Z] Copying: 405/1024 [MB] (10 MBps) [2024-11-29T09:42:38.379Z] Copying: 416/1024 [MB] (10 MBps) [2024-11-29T09:42:39.344Z] Copying: 426/1024 [MB] (10 MBps) [2024-11-29T09:42:40.288Z] Copying: 437/1024 [MB] (11 MBps) [2024-11-29T09:42:41.677Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-29T09:42:42.620Z] Copying: 459/1024 [MB] (10 MBps) [2024-11-29T09:42:43.561Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-29T09:42:44.510Z] Copying: 480/1024 [MB] (10 MBps) [2024-11-29T09:42:45.452Z] Copying: 491/1024 [MB] (10 MBps) [2024-11-29T09:42:46.397Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-29T09:42:47.451Z] Copying: 512/1024 [MB] (10 MBps) [2024-11-29T09:42:48.418Z] Copying: 523/1024 [MB] (11 MBps) [2024-11-29T09:42:49.358Z] Copying: 534/1024 [MB] (10 MBps) [2024-11-29T09:42:50.301Z] Copying: 545/1024 [MB] (11 MBps) [2024-11-29T09:42:51.688Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-29T09:42:52.632Z] Copying: 567/1024 [MB] (10 MBps) [2024-11-29T09:42:53.594Z] Copying: 578/1024 [MB] (11 MBps) [2024-11-29T09:42:54.538Z] Copying: 596/1024 [MB] (18 MBps) [2024-11-29T09:42:55.484Z] Copying: 613/1024 [MB] (16 MBps) [2024-11-29T09:42:56.424Z] Copying: 630/1024 [MB] (16 MBps) [2024-11-29T09:42:57.383Z] Copying: 649/1024 [MB] (19 MBps) [2024-11-29T09:42:58.326Z] Copying: 674/1024 [MB] (24 MBps) [2024-11-29T09:42:59.272Z] Copying: 690/1024 [MB] (16 MBps) [2024-11-29T09:43:00.655Z] Copying: 708/1024 [MB] (17 MBps) [2024-11-29T09:43:01.593Z] Copying: 732/1024 [MB] (24 MBps) [2024-11-29T09:43:02.537Z] Copying: 761/1024 [MB] (29 MBps) [2024-11-29T09:43:03.479Z] Copying: 782/1024 [MB] (21 MBps) [2024-11-29T09:43:04.423Z] Copying: 804/1024 [MB] (21 MBps) [2024-11-29T09:43:05.367Z] Copying: 821/1024 [MB] (17 MBps) [2024-11-29T09:43:06.307Z] Copying: 838/1024 [MB] (16 MBps) [2024-11-29T09:43:07.694Z] Copying: 861/1024 [MB] (23 MBps) [2024-11-29T09:43:08.265Z] Copying: 886/1024 [MB] (24 MBps) [2024-11-29T09:43:09.649Z] Copying: 908/1024 [MB] (22 MBps) [2024-11-29T09:43:10.588Z] Copying: 927/1024 [MB] (19 MBps) [2024-11-29T09:43:11.527Z] Copying: 950/1024 [MB] (23 MBps) 
[2024-11-29T09:43:12.470Z] Copying: 990/1024 [MB] (39 MBps) [2024-11-29T09:43:13.412Z] Copying: 1013/1024 [MB] (23 MBps) [2024-11-29T09:43:13.983Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-29T09:43:13.984Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 09:43:13.747077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.258 [2024-11-29 09:43:13.747240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:46.258 [2024-11-29 09:43:13.747262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:46.258 [2024-11-29 09:43:13.747277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.258 [2024-11-29 09:43:13.750545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:46.258 [2024-11-29 09:43:13.751747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.258 [2024-11-29 09:43:13.751775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:46.258 [2024-11-29 09:43:13.751786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:25:46.258 [2024-11-29 09:43:13.751794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.258 [2024-11-29 09:43:13.762896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.258 [2024-11-29 09:43:13.762929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:46.258 [2024-11-29 09:43:13.762939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.519 ms 00:25:46.258 [2024-11-29 09:43:13.762947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.258 [2024-11-29 09:43:13.783301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.258 [2024-11-29 09:43:13.783332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:46.258 [2024-11-29 09:43:13.783343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.334 ms 00:25:46.258 [2024-11-29 09:43:13.783357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.258 [2024-11-29 09:43:13.789504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.258 [2024-11-29 09:43:13.789530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:46.258 [2024-11-29 09:43:13.789545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:25:46.258 [2024-11-29 09:43:13.789552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.258 [2024-11-29 09:43:13.791961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.258 [2024-11-29 09:43:13.791993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:46.258 [2024-11-29 09:43:13.792002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.343 ms 00:25:46.258 [2024-11-29 09:43:13.792009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.258 [2024-11-29 09:43:13.795720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.258 [2024-11-29 09:43:13.795751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:46.258 [2024-11-29 09:43:13.795761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:25:46.258 [2024-11-29 09:43:13.795768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.520 
[2024-11-29 09:43:13.995264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.520 [2024-11-29 09:43:13.995325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:46.520 [2024-11-29 09:43:13.995338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 199.461 ms 00:25:46.520 [2024-11-29 09:43:13.995351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.520 [2024-11-29 09:43:13.997378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.520 [2024-11-29 09:43:13.997504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:46.520 [2024-11-29 09:43:13.997520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.008 ms 00:25:46.520 [2024-11-29 09:43:13.997527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.520 [2024-11-29 09:43:13.999112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.520 [2024-11-29 09:43:13.999143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:46.520 [2024-11-29 09:43:13.999152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:25:46.520 [2024-11-29 09:43:13.999159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.520 [2024-11-29 09:43:14.000428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.520 [2024-11-29 09:43:14.000458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:46.520 [2024-11-29 09:43:14.000468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:25:46.520 [2024-11-29 09:43:14.000475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.520 [2024-11-29 09:43:14.001527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.520 [2024-11-29 09:43:14.001655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:46.520 [2024-11-29 09:43:14.001706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:25:46.520 [2024-11-29 09:43:14.001716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.520 [2024-11-29 09:43:14.001742] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:46.520 [2024-11-29 09:43:14.001755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109056 / 261120 wr_cnt: 1 state: open 00:25:46.520 [2024-11-29 09:43:14.001770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001824] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:46.520 [2024-11-29 09:43:14.001935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.001993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 
[2024-11-29 09:43:14.002039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 
state: free 00:25:46.521 [2024-11-29 09:43:14.002219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 
0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:46.521 [2024-11-29 09:43:14.002530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:46.521 [2024-11-29 09:43:14.002537] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bdcd7e5-faf1-4d3d-9a3e-4f068cf2336b 00:25:46.521 [2024-11-29 09:43:14.002545] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109056 00:25:46.521 [2024-11-29 09:43:14.002556] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110016 00:25:46.522 [2024-11-29 09:43:14.002567] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109056 00:25:46.522 [2024-11-29 09:43:14.002575] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:25:46.522 [2024-11-29 09:43:14.002582] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:46.522 [2024-11-29 09:43:14.002613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:46.522 [2024-11-29 09:43:14.002625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:46.522 [2024-11-29 
09:43:14.002631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:46.522 [2024-11-29 09:43:14.002637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:46.522 [2024-11-29 09:43:14.002644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-29 09:43:14.002654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:46.522 [2024-11-29 09:43:14.002663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:25:46.522 [2024-11-29 09:43:14.002670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.004061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-29 09:43:14.004154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:46.522 [2024-11-29 09:43:14.004174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:25:46.522 [2024-11-29 09:43:14.004182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.004260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-29 09:43:14.004269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:46.522 [2024-11-29 09:43:14.004277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:46.522 [2024-11-29 09:43:14.004283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.009016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.009047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.522 [2024-11-29 09:43:14.009056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.009067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.009115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.009122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.522 [2024-11-29 09:43:14.009130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.009137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.009173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.009182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.522 [2024-11-29 09:43:14.009193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.009200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.009216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.009223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.522 [2024-11-29 09:43:14.009230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.009237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.017923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.017961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:25:46.522 [2024-11-29 09:43:14.017972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.017979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.024798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.024836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.522 [2024-11-29 09:43:14.024847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.024854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.024896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.024905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.522 [2024-11-29 09:43:14.024913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.024927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.024950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.024961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.522 [2024-11-29 09:43:14.024972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.024982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.025040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.025049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.522 [2024-11-29 09:43:14.025057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.025064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.025094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.025103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:46.522 [2024-11-29 09:43:14.025114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.025121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.025161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.025169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.522 [2024-11-29 09:43:14.025177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.025184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.025225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.522 [2024-11-29 09:43:14.025237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.522 [2024-11-29 09:43:14.025245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.522 [2024-11-29 09:43:14.025253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-29 09:43:14.025364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 280.698 ms, result 0 00:25:47.093 00:25:47.093 00:25:47.352 09:43:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:49.263 09:43:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:49.523 [2024-11-29 09:43:17.033414] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:49.523 [2024-11-29 09:43:17.033542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93973 ] 00:25:49.523 [2024-11-29 09:43:17.165720] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:49.523 [2024-11-29 09:43:17.196693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.523 [2024-11-29 09:43:17.219221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.783 [2024-11-29 09:43:17.325705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.783 [2024-11-29 09:43:17.325782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.783 [2024-11-29 09:43:17.487821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.487885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:49.783 [2024-11-29 09:43:17.487901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:49.783 [2024-11-29 09:43:17.487910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.487974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.487989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.783 [2024-11-29 09:43:17.487998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:49.783 [2024-11-29 09:43:17.488008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.488030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:49.783 [2024-11-29 09:43:17.488298] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:49.783 [2024-11-29 09:43:17.488319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.488335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.783 [2024-11-29 09:43:17.488344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:49.783 [2024-11-29 09:43:17.488352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.490121] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:49.783 [2024-11-29 09:43:17.494116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.494169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super 
block 00:25:49.783 [2024-11-29 09:43:17.494191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.999 ms 00:25:49.783 [2024-11-29 09:43:17.494200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.494291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.494307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:49.783 [2024-11-29 09:43:17.494316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:49.783 [2024-11-29 09:43:17.494323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.502900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.502951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.783 [2024-11-29 09:43:17.502962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.534 ms 00:25:49.783 [2024-11-29 09:43:17.502970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.503081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.503091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.783 [2024-11-29 09:43:17.503101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:49.783 [2024-11-29 09:43:17.503116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.503179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.783 [2024-11-29 09:43:17.503193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:49.783 [2024-11-29 09:43:17.503204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:49.783 [2024-11-29 09:43:17.503211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.783 [2024-11-29 09:43:17.503233] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:50.044 [2024-11-29 09:43:17.505385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.044 [2024-11-29 09:43:17.505431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.044 [2024-11-29 09:43:17.505442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:25:50.044 [2024-11-29 09:43:17.505450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.044 [2024-11-29 09:43:17.505486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.044 [2024-11-29 09:43:17.505494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:50.044 [2024-11-29 09:43:17.505510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:50.044 [2024-11-29 09:43:17.505518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.044 [2024-11-29 09:43:17.505562] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:50.044 [2024-11-29 09:43:17.505584] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:50.044 [2024-11-29 09:43:17.505644] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:50.044 [2024-11-29 
09:43:17.505666] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:50.044 [2024-11-29 09:43:17.505772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:50.044 [2024-11-29 09:43:17.505789] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:50.045 [2024-11-29 09:43:17.505801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:50.045 [2024-11-29 09:43:17.505811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:50.045 [2024-11-29 09:43:17.505820] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:50.045 [2024-11-29 09:43:17.505827] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:50.045 [2024-11-29 09:43:17.505835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:50.045 [2024-11-29 09:43:17.505842] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:50.045 [2024-11-29 09:43:17.505850] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:50.045 [2024-11-29 09:43:17.505858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.045 [2024-11-29 09:43:17.505866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:50.045 [2024-11-29 09:43:17.505874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:25:50.045 [2024-11-29 09:43:17.505888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.045 [2024-11-29 09:43:17.505972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.045 [2024-11-29 09:43:17.505985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:50.045 [2024-11-29 09:43:17.505993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:50.045 [2024-11-29 09:43:17.506001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.045 [2024-11-29 09:43:17.506099] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:50.045 [2024-11-29 09:43:17.506110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:50.045 [2024-11-29 09:43:17.506120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:50.045 [2024-11-29 09:43:17.506157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:50.045 [2024-11-29 09:43:17.506190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.045 [2024-11-29 09:43:17.506210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:50.045 [2024-11-29 09:43:17.506218] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:50.045 [2024-11-29 09:43:17.506225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.045 [2024-11-29 09:43:17.506233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:50.045 [2024-11-29 09:43:17.506244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:50.045 [2024-11-29 09:43:17.506252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:50.045 [2024-11-29 09:43:17.506274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:50.045 [2024-11-29 09:43:17.506312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:50.045 [2024-11-29 09:43:17.506337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:50.045 [2024-11-29 09:43:17.506363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:50.045 [2024-11-29 09:43:17.506394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:50.045 [2024-11-29 09:43:17.506426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.045 [2024-11-29 09:43:17.506442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:50.045 [2024-11-29 09:43:17.506454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:50.045 [2024-11-29 09:43:17.506462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.045 [2024-11-29 09:43:17.506469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:50.045 [2024-11-29 09:43:17.506478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:50.045 [2024-11-29 09:43:17.506489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:50.045 [2024-11-29 09:43:17.506512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:50.045 [2024-11-29 09:43:17.506527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.045 [2024-11-29 
09:43:17.506534] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:50.045 [2024-11-29 09:43:17.506542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:50.045 [2024-11-29 09:43:17.506550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.045 [2024-11-29 09:43:17.506567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:50.045 [2024-11-29 09:43:17.506574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:50.045 [2024-11-29 09:43:17.506581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:50.045 [2024-11-29 09:43:17.506606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:50.045 [2024-11-29 09:43:17.506613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:50.045 [2024-11-29 09:43:17.506620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:50.045 [2024-11-29 09:43:17.506630] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:50.045 [2024-11-29 09:43:17.506640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.045 [2024-11-29 09:43:17.506649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:50.045 [2024-11-29 09:43:17.506657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:50.045 [2024-11-29 09:43:17.506664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:50.045 [2024-11-29 09:43:17.506674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:50.045 [2024-11-29 09:43:17.506683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:50.045 [2024-11-29 09:43:17.506691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:50.045 [2024-11-29 09:43:17.506700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:50.045 [2024-11-29 09:43:17.506707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:50.045 [2024-11-29 09:43:17.506714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:50.045 [2024-11-29 09:43:17.506731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:50.045 [2024-11-29 09:43:17.506738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:50.045 [2024-11-29 09:43:17.506746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:50.045 [2024-11-29 09:43:17.506753] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:50.045 [2024-11-29 09:43:17.506761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:50.045 [2024-11-29 09:43:17.506768] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:50.045 [2024-11-29 09:43:17.506776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.045 [2024-11-29 09:43:17.506784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:50.045 [2024-11-29 09:43:17.506791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:50.045 [2024-11-29 09:43:17.506798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:50.045 [2024-11-29 09:43:17.506808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:50.045 [2024-11-29 09:43:17.506816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.045 [2024-11-29 09:43:17.506825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:50.045 [2024-11-29 09:43:17.506833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:25:50.045 [2024-11-29 09:43:17.506844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.045 [2024-11-29 09:43:17.521734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.045 [2024-11-29 09:43:17.521782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.045 [2024-11-29 09:43:17.521801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.823 ms 00:25:50.045 [2024-11-29 09:43:17.521810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.521902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.521912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:50.046 [2024-11-29 09:43:17.521922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:50.046 [2024-11-29 09:43:17.521931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.548380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.548459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.046 [2024-11-29 09:43:17.548481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.379 ms 00:25:50.046 [2024-11-29 09:43:17.548496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.548584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.548641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.046 [2024-11-29 09:43:17.548665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:50.046 [2024-11-29 09:43:17.548690] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.549333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.549395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.046 [2024-11-29 09:43:17.549414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:25:50.046 [2024-11-29 09:43:17.549430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.549739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.549760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.046 [2024-11-29 09:43:17.549776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:25:50.046 [2024-11-29 09:43:17.549798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.558212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.558260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.046 [2024-11-29 09:43:17.558271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.372 ms 00:25:50.046 [2024-11-29 09:43:17.558292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.562264] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:50.046 [2024-11-29 09:43:17.562318] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:50.046 [2024-11-29 09:43:17.562335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.562344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:50.046 [2024-11-29 09:43:17.562353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.952 ms 00:25:50.046 [2024-11-29 09:43:17.562361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.580648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.580697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:50.046 [2024-11-29 09:43:17.580709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.222 ms 00:25:50.046 [2024-11-29 09:43:17.580718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.583817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.583868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:50.046 [2024-11-29 09:43:17.583878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:25:50.046 [2024-11-29 09:43:17.583885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.586511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.586558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:50.046 [2024-11-29 09:43:17.586569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:25:50.046 [2024-11-29 09:43:17.586576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:50.046 [2024-11-29 09:43:17.586942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.586956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:50.046 [2024-11-29 09:43:17.586970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:25:50.046 [2024-11-29 09:43:17.586980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.611215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.611287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:50.046 [2024-11-29 09:43:17.611304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.215 ms 00:25:50.046 [2024-11-29 09:43:17.611314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.619856] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:50.046 [2024-11-29 09:43:17.622953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.623000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:50.046 [2024-11-29 09:43:17.623012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.583 ms 00:25:50.046 [2024-11-29 09:43:17.623032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.623115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.623127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:50.046 [2024-11-29 09:43:17.623137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:50.046 [2024-11-29 09:43:17.623145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.624969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.625147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:50.046 [2024-11-29 09:43:17.625167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:25:50.046 [2024-11-29 09:43:17.625177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.625216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.625226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:50.046 [2024-11-29 09:43:17.625235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:50.046 [2024-11-29 09:43:17.625243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.625284] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:50.046 [2024-11-29 09:43:17.625295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.625312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:50.046 [2024-11-29 09:43:17.625321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:50.046 [2024-11-29 09:43:17.625329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.631177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 
[2024-11-29 09:43:17.631227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:50.046 [2024-11-29 09:43:17.631246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.827 ms 00:25:50.046 [2024-11-29 09:43:17.631255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.631340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.046 [2024-11-29 09:43:17.631351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:50.046 [2024-11-29 09:43:17.631364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:50.046 [2024-11-29 09:43:17.631372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.046 [2024-11-29 09:43:17.632743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.432 ms, result 0 00:25:51.444  [2024-11-29T09:43:20.107Z] Copying: 984/1048576 [kB] (984 kBps) [2024-11-29T09:43:21.051Z] Copying: 5148/1048576 [kB] (4164 kBps) [2024-11-29T09:43:22.060Z] Copying: 27/1024 [MB] (22 MBps) [2024-11-29T09:43:23.017Z] Copying: 57/1024 [MB] (30 MBps) [2024-11-29T09:43:23.955Z] Copying: 86/1024 [MB] (28 MBps) [2024-11-29T09:43:24.889Z] Copying: 120/1024 [MB] (34 MBps) [2024-11-29T09:43:25.821Z] Copying: 173/1024 [MB] (52 MBps) [2024-11-29T09:43:27.195Z] Copying: 225/1024 [MB] (51 MBps) [2024-11-29T09:43:28.128Z] Copying: 278/1024 [MB] (53 MBps) [2024-11-29T09:43:29.062Z] Copying: 329/1024 [MB] (50 MBps) [2024-11-29T09:43:29.996Z] Copying: 383/1024 [MB] (54 MBps) [2024-11-29T09:43:30.987Z] Copying: 437/1024 [MB] (54 MBps) [2024-11-29T09:43:31.918Z] Copying: 491/1024 [MB] (53 MBps) [2024-11-29T09:43:32.852Z] Copying: 544/1024 [MB] (53 MBps) [2024-11-29T09:43:34.221Z] Copying: 596/1024 [MB] (51 MBps) [2024-11-29T09:43:35.152Z] Copying: 650/1024 [MB] (54 MBps) [2024-11-29T09:43:36.087Z] Copying: 704/1024 [MB] (54 MBps) [2024-11-29T09:43:37.031Z] Copying: 757/1024 [MB] (53 MBps) [2024-11-29T09:43:37.973Z] Copying: 784/1024 [MB] (27 MBps) [2024-11-29T09:43:38.920Z] Copying: 810/1024 [MB] (25 MBps) [2024-11-29T09:43:39.861Z] Copying: 842/1024 [MB] (32 MBps) [2024-11-29T09:43:41.256Z] Copying: 879/1024 [MB] (37 MBps) [2024-11-29T09:43:41.829Z] Copying: 898/1024 [MB] (18 MBps) [2024-11-29T09:43:43.213Z] Copying: 924/1024 [MB] (25 MBps) [2024-11-29T09:43:44.155Z] Copying: 952/1024 [MB] (28 MBps) [2024-11-29T09:43:45.100Z] Copying: 967/1024 [MB] (14 MBps) [2024-11-29T09:43:46.046Z] Copying: 992/1024 [MB] (25 MBps) [2024-11-29T09:43:46.316Z] Copying: 1016/1024 [MB] (23 MBps) [2024-11-29T09:43:46.576Z] Copying: 1024/1024 [MB] (average 36 MBps)[2024-11-29 09:43:46.565925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.850 [2024-11-29 09:43:46.565985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:18.850 [2024-11-29 09:43:46.566005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:18.850 [2024-11-29 09:43:46.566013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.850 [2024-11-29 09:43:46.566036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:18.850 [2024-11-29 09:43:46.566622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.850 [2024-11-29 09:43:46.566642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
00:26:18.850 [2024-11-29 09:43:46.566652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:26:18.850 [2024-11-29 09:43:46.566667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.850 [2024-11-29 09:43:46.566954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.850 [2024-11-29 09:43:46.566965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:18.850 [2024-11-29 09:43:46.566974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:26:18.850 [2024-11-29 09:43:46.566982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.579255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.579410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:19.113 [2024-11-29 09:43:46.579430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:26:19.113 [2024-11-29 09:43:46.579438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.586151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.586185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:19.113 [2024-11-29 09:43:46.586201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.673 ms 00:26:19.113 [2024-11-29 09:43:46.586209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.588559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.588609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:19.113 [2024-11-29 09:43:46.588619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.316 ms 00:26:19.113 [2024-11-29 09:43:46.588626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.592383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.592427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:19.113 [2024-11-29 09:43:46.592447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:26:19.113 [2024-11-29 09:43:46.592460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.596277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.596313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:19.113 [2024-11-29 09:43:46.596323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.789 ms 00:26:19.113 [2024-11-29 09:43:46.596330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.598908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.598940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:19.113 [2024-11-29 09:43:46.598958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:26:19.113 [2024-11-29 09:43:46.598965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.600809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.600840] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:19.113 [2024-11-29 09:43:46.600848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:26:19.113 [2024-11-29 09:43:46.600855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.602102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.602134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:19.113 [2024-11-29 09:43:46.602143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:26:19.113 [2024-11-29 09:43:46.602150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.603216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.603247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:19.113 [2024-11-29 09:43:46.603256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:26:19.113 [2024-11-29 09:43:46.603263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.603290] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:19.113 [2024-11-29 09:43:46.603304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:19.113 [2024-11-29 09:43:46.603320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:19.113 [2024-11-29 09:43:46.603328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603625] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603817] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.603994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 
09:43:46.604001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:19.113 [2024-11-29 09:43:46.604081] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:19.113 [2024-11-29 09:43:46.604092] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bdcd7e5-faf1-4d3d-9a3e-4f068cf2336b 00:26:19.113 [2024-11-29 09:43:46.604104] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:19.113 [2024-11-29 09:43:46.604112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 155584 00:26:19.113 [2024-11-29 09:43:46.604118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 153600 00:26:19.113 [2024-11-29 09:43:46.604126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:26:19.113 [2024-11-29 09:43:46.604133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:19.113 [2024-11-29 09:43:46.604140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:19.113 [2024-11-29 09:43:46.604148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:19.113 [2024-11-29 09:43:46.604154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:19.113 [2024-11-29 09:43:46.604160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:19.113 [2024-11-29 09:43:46.604168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.604175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:19.113 [2024-11-29 09:43:46.604190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:26:19.113 [2024-11-29 09:43:46.604199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.605773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.605799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:19.113 [2024-11-29 09:43:46.605808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:26:19.113 [2024-11-29 09:43:46.605816] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.605896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.113 [2024-11-29 09:43:46.605908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:19.113 [2024-11-29 09:43:46.605916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:19.113 [2024-11-29 09:43:46.605923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.611035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.113 [2024-11-29 09:43:46.611069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:19.113 [2024-11-29 09:43:46.611078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.113 [2024-11-29 09:43:46.611085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.611140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.113 [2024-11-29 09:43:46.611151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:19.113 [2024-11-29 09:43:46.611158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.113 [2024-11-29 09:43:46.611165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.113 [2024-11-29 09:43:46.611213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.611223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:19.114 [2024-11-29 09:43:46.611230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.611238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.611251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.611259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:19.114 [2024-11-29 09:43:46.611270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.611277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.620882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.620923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:19.114 [2024-11-29 09:43:46.620934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.620941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.628403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.628453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:19.114 [2024-11-29 09:43:46.628463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.628470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.628501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.628510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:19.114 [2024-11-29 09:43:46.628517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:26:19.114 [2024-11-29 09:43:46.628525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.628569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.628579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:19.114 [2024-11-29 09:43:46.628609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.628619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.628682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.628695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:19.114 [2024-11-29 09:43:46.628703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.628710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.628742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.628751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:19.114 [2024-11-29 09:43:46.628759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.628766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.628804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.628814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:19.114 [2024-11-29 09:43:46.628821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.628829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.628867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.114 [2024-11-29 09:43:46.628877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:19.114 [2024-11-29 09:43:46.628884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.114 [2024-11-29 09:43:46.628898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.114 [2024-11-29 09:43:46.629010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.063 ms, result 0 00:26:19.114 00:26:19.114 00:26:19.114 09:43:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:21.651 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:21.651 09:43:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:21.651 [2024-11-29 09:43:49.055562] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
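The two commands just above (test/ftl/dirty_shutdown.sh lines 94 and 95, per the "@94"/"@95" markers) are the verification half of the dirty-shutdown test: checksum the first half of the data written before the unclean shutdown, then read the second half back from the recovered ftl0 device. A minimal stand-alone sketch, assuming only the paths and flags visible in this log (the "1024/1024 [MB]" progress total further below implies 4 KiB blocks, so --count=262144 --skip=262144 addresses the second 1 GiB):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Verify the first half against the checksum recorded before the dirty shutdown.
  md5sum -c "$SPDK/test/ftl/testfile.md5"
  # Read the second 262144 blocks back from ftl0 into testfile2 for a later compare.
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
      --count=262144 --skip=262144 --json="$SPDK/test/ftl/config/ftl.json"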
00:26:21.651 [2024-11-29 09:43:49.055725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94302 ] 00:26:21.651 [2024-11-29 09:43:49.191567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:21.651 [2024-11-29 09:43:49.221415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.651 [2024-11-29 09:43:49.250683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.651 [2024-11-29 09:43:49.368535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:21.651 [2024-11-29 09:43:49.368919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:21.911 [2024-11-29 09:43:49.531391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.531673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:21.911 [2024-11-29 09:43:49.531865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:21.911 [2024-11-29 09:43:49.531915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.532045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.532097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.911 [2024-11-29 09:43:49.532132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:21.911 [2024-11-29 09:43:49.532171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.532402] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:21.911 [2024-11-29 09:43:49.532854] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:21.911 [2024-11-29 09:43:49.532928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.533042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.911 [2024-11-29 09:43:49.533086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:26:21.911 [2024-11-29 09:43:49.533120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.534991] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:21.911 [2024-11-29 09:43:49.538919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.539093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:21.911 [2024-11-29 09:43:49.539189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.930 ms 00:26:21.911 [2024-11-29 09:43:49.539232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.539340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.539391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:21.911 [2024-11-29 09:43:49.539615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:21.911 [2024-11-29 
09:43:49.539656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.547834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.547998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.911 [2024-11-29 09:43:49.548076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.082 ms 00:26:21.911 [2024-11-29 09:43:49.548112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.548263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.548314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.911 [2024-11-29 09:43:49.548350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:26:21.911 [2024-11-29 09:43:49.548382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.548496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.548622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:21.911 [2024-11-29 09:43:49.548667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:21.911 [2024-11-29 09:43:49.548706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.548769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:21.911 [2024-11-29 09:43:49.550915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.551059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.911 [2024-11-29 09:43:49.551153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:26:21.911 [2024-11-29 09:43:49.551194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.551281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.551336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:21.911 [2024-11-29 09:43:49.551380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:21.911 [2024-11-29 09:43:49.551418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.551476] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:21.911 [2024-11-29 09:43:49.551540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:21.911 [2024-11-29 09:43:49.551735] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:21.911 [2024-11-29 09:43:49.551813] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:21.911 [2024-11-29 09:43:49.551995] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:21.911 [2024-11-29 09:43:49.552055] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:21.911 [2024-11-29 09:43:49.552115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:21.911 
[2024-11-29 09:43:49.552250] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:21.911 [2024-11-29 09:43:49.552315] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:21.911 [2024-11-29 09:43:49.552459] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:21.911 [2024-11-29 09:43:49.552658] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:21.911 [2024-11-29 09:43:49.552706] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:21.911 [2024-11-29 09:43:49.552744] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:21.911 [2024-11-29 09:43:49.552792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.552832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:21.911 [2024-11-29 09:43:49.552872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:26:21.911 [2024-11-29 09:43:49.553000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.553185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.911 [2024-11-29 09:43:49.553235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:21.911 [2024-11-29 09:43:49.553277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:21.911 [2024-11-29 09:43:49.553383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.911 [2024-11-29 09:43:49.553640] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:21.911 [2024-11-29 09:43:49.553773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:21.911 [2024-11-29 09:43:49.553811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.911 [2024-11-29 09:43:49.553844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.911 [2024-11-29 09:43:49.553883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:21.911 [2024-11-29 09:43:49.553916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:21.911 [2024-11-29 09:43:49.553959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:21.911 [2024-11-29 09:43:49.553993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:21.911 [2024-11-29 09:43:49.554030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.911 [2024-11-29 09:43:49.554056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:21.911 [2024-11-29 09:43:49.554069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:21.911 [2024-11-29 09:43:49.554081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.911 [2024-11-29 09:43:49.554092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:21.911 [2024-11-29 09:43:49.554104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:21.911 [2024-11-29 09:43:49.554116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:26:21.911 [2024-11-29 09:43:49.554140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:21.911 [2024-11-29 09:43:49.554151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:21.911 [2024-11-29 09:43:49.554174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.911 [2024-11-29 09:43:49.554197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:21.911 [2024-11-29 09:43:49.554208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.911 [2024-11-29 09:43:49.554238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:21.911 [2024-11-29 09:43:49.554249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.911 [2024-11-29 09:43:49.554272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:21.911 [2024-11-29 09:43:49.554284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.911 [2024-11-29 09:43:49.554306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:21.911 [2024-11-29 09:43:49.554318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.911 [2024-11-29 09:43:49.554340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:21.911 [2024-11-29 09:43:49.554351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:21.911 [2024-11-29 09:43:49.554362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.911 [2024-11-29 09:43:49.554373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:21.911 [2024-11-29 09:43:49.554385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:21.911 [2024-11-29 09:43:49.554396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:21.911 [2024-11-29 09:43:49.554422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:21.911 [2024-11-29 09:43:49.554433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554444] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:21.911 [2024-11-29 09:43:49.554458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:21.911 [2024-11-29 09:43:49.554471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.911 [2024-11-29 09:43:49.554483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.911 [2024-11-29 09:43:49.554495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:21.911 [2024-11-29 09:43:49.554507] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:21.911 [2024-11-29 09:43:49.554520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:21.911 [2024-11-29 09:43:49.554531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:21.912 [2024-11-29 09:43:49.554543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:21.912 [2024-11-29 09:43:49.554555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:21.912 [2024-11-29 09:43:49.554570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:21.912 [2024-11-29 09:43:49.554603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:21.912 [2024-11-29 09:43:49.554619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:21.912 [2024-11-29 09:43:49.554636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:21.912 [2024-11-29 09:43:49.554651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:21.912 [2024-11-29 09:43:49.554665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:21.912 [2024-11-29 09:43:49.554679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:21.912 [2024-11-29 09:43:49.554694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:21.912 [2024-11-29 09:43:49.554709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:21.912 [2024-11-29 09:43:49.554723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:21.912 [2024-11-29 09:43:49.554737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:21.912 [2024-11-29 09:43:49.554751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:21.912 [2024-11-29 09:43:49.554764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:21.912 [2024-11-29 09:43:49.554779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:21.912 [2024-11-29 09:43:49.554794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:21.912 [2024-11-29 09:43:49.554808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:21.912 [2024-11-29 09:43:49.554824] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:21.912 [2024-11-29 09:43:49.554840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:26:21.912 [2024-11-29 09:43:49.554855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:21.912 [2024-11-29 09:43:49.554874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:21.912 [2024-11-29 09:43:49.554889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:21.912 [2024-11-29 09:43:49.554902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:21.912 [2024-11-29 09:43:49.554918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.554940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:21.912 [2024-11-29 09:43:49.554962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:26:21.912 [2024-11-29 09:43:49.554979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.569294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.569346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.912 [2024-11-29 09:43:49.569364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.228 ms 00:26:21.912 [2024-11-29 09:43:49.569377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.569493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.569525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:21.912 [2024-11-29 09:43:49.569541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:26:21.912 [2024-11-29 09:43:49.569552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.594100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.594178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.912 [2024-11-29 09:43:49.594209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.423 ms 00:26:21.912 [2024-11-29 09:43:49.594244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.594336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.594374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.912 [2024-11-29 09:43:49.594400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:21.912 [2024-11-29 09:43:49.594431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.595191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.595249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.912 [2024-11-29 09:43:49.595274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:26:21.912 [2024-11-29 09:43:49.595292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.595583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:21.912 [2024-11-29 09:43:49.595635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.912 [2024-11-29 09:43:49.595658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:26:21.912 [2024-11-29 09:43:49.595676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.604009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.604059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.912 [2024-11-29 09:43:49.604073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.276 ms 00:26:21.912 [2024-11-29 09:43:49.604100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.607927] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:21.912 [2024-11-29 09:43:49.607981] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:21.912 [2024-11-29 09:43:49.608004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.608017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:21.912 [2024-11-29 09:43:49.608029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.782 ms 00:26:21.912 [2024-11-29 09:43:49.608040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.623769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.623827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:21.912 [2024-11-29 09:43:49.623844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.664 ms 00:26:21.912 [2024-11-29 09:43:49.623863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.626871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.626927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:21.912 [2024-11-29 09:43:49.626941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:26:21.912 [2024-11-29 09:43:49.626952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.629606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.629664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:21.912 [2024-11-29 09:43:49.629680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:26:21.912 [2024-11-29 09:43:49.629691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.912 [2024-11-29 09:43:49.630148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.912 [2024-11-29 09:43:49.630189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:21.912 [2024-11-29 09:43:49.630212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:26:21.912 [2024-11-29 09:43:49.630228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.173 [2024-11-29 09:43:49.653890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.173 [2024-11-29 09:43:49.653958] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:22.173 [2024-11-29 09:43:49.653980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.627 ms 00:26:22.174 [2024-11-29 09:43:49.653992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.662390] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:22.174 [2024-11-29 09:43:49.665680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.174 [2024-11-29 09:43:49.665728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:22.174 [2024-11-29 09:43:49.665745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.618 ms 00:26:22.174 [2024-11-29 09:43:49.665764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.665871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.174 [2024-11-29 09:43:49.665888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:22.174 [2024-11-29 09:43:49.665903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:22.174 [2024-11-29 09:43:49.665915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.666865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.174 [2024-11-29 09:43:49.666920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:22.174 [2024-11-29 09:43:49.666945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:26:22.174 [2024-11-29 09:43:49.666960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.667010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.174 [2024-11-29 09:43:49.667023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:22.174 [2024-11-29 09:43:49.667036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:22.174 [2024-11-29 09:43:49.667049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.667098] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:22.174 [2024-11-29 09:43:49.667118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.174 [2024-11-29 09:43:49.667135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:22.174 [2024-11-29 09:43:49.667149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:22.174 [2024-11-29 09:43:49.667162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.673399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.174 [2024-11-29 09:43:49.673583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:22.174 [2024-11-29 09:43:49.673623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.205 ms 00:26:22.174 [2024-11-29 09:43:49.673635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.673822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.174 [2024-11-29 09:43:49.673851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:22.174 [2024-11-29 09:43:49.673876] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:22.174 [2024-11-29 09:43:49.673890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.174 [2024-11-29 09:43:49.675199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 143.276 ms, result 0 00:26:23.556  [2024-11-29T09:43:51.855Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-29T09:43:53.264Z] Copying: 36/1024 [MB] (18 MBps) [2024-11-29T09:43:54.206Z] Copying: 56/1024 [MB] (19 MBps) [2024-11-29T09:43:55.152Z] Copying: 72/1024 [MB] (16 MBps) [2024-11-29T09:43:56.096Z] Copying: 84596/1048576 [kB] (9884 kBps) [2024-11-29T09:43:57.043Z] Copying: 94724/1048576 [kB] (10128 kBps) [2024-11-29T09:43:58.001Z] Copying: 104524/1048576 [kB] (9800 kBps) [2024-11-29T09:43:58.946Z] Copying: 114080/1048576 [kB] (9556 kBps) [2024-11-29T09:43:59.888Z] Copying: 124028/1048576 [kB] (9948 kBps) [2024-11-29T09:44:01.273Z] Copying: 131/1024 [MB] (10 MBps) [2024-11-29T09:44:02.222Z] Copying: 144240/1048576 [kB] (9964 kBps) [2024-11-29T09:44:03.214Z] Copying: 153936/1048576 [kB] (9696 kBps) [2024-11-29T09:44:04.158Z] Copying: 169/1024 [MB] (19 MBps) [2024-11-29T09:44:05.101Z] Copying: 193/1024 [MB] (24 MBps) [2024-11-29T09:44:06.039Z] Copying: 212/1024 [MB] (18 MBps) [2024-11-29T09:44:06.983Z] Copying: 224/1024 [MB] (12 MBps) [2024-11-29T09:44:07.929Z] Copying: 241/1024 [MB] (17 MBps) [2024-11-29T09:44:08.875Z] Copying: 254/1024 [MB] (12 MBps) [2024-11-29T09:44:10.263Z] Copying: 270648/1048576 [kB] (9660 kBps) [2024-11-29T09:44:11.201Z] Copying: 280504/1048576 [kB] (9856 kBps) [2024-11-29T09:44:12.143Z] Copying: 289640/1048576 [kB] (9136 kBps) [2024-11-29T09:44:13.083Z] Copying: 298820/1048576 [kB] (9180 kBps) [2024-11-29T09:44:14.021Z] Copying: 308660/1048576 [kB] (9840 kBps) [2024-11-29T09:44:14.960Z] Copying: 318196/1048576 [kB] (9536 kBps) [2024-11-29T09:44:15.903Z] Copying: 320/1024 [MB] (10 MBps) [2024-11-29T09:44:17.292Z] Copying: 331/1024 [MB] (10 MBps) [2024-11-29T09:44:17.865Z] Copying: 349128/1048576 [kB] (9768 kBps) [2024-11-29T09:44:19.254Z] Copying: 358652/1048576 [kB] (9524 kBps) [2024-11-29T09:44:20.195Z] Copying: 368596/1048576 [kB] (9944 kBps) [2024-11-29T09:44:21.137Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-29T09:44:22.082Z] Copying: 382/1024 [MB] (11 MBps) [2024-11-29T09:44:23.028Z] Copying: 397/1024 [MB] (15 MBps) [2024-11-29T09:44:24.049Z] Copying: 408/1024 [MB] (10 MBps) [2024-11-29T09:44:24.993Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-29T09:44:25.936Z] Copying: 430/1024 [MB] (11 MBps) [2024-11-29T09:44:26.883Z] Copying: 440/1024 [MB] (10 MBps) [2024-11-29T09:44:28.269Z] Copying: 460776/1048576 [kB] (9576 kBps) [2024-11-29T09:44:29.215Z] Copying: 460/1024 [MB] (10 MBps) [2024-11-29T09:44:30.158Z] Copying: 470/1024 [MB] (10 MBps) [2024-11-29T09:44:31.101Z] Copying: 481/1024 [MB] (11 MBps) [2024-11-29T09:44:32.045Z] Copying: 495/1024 [MB] (13 MBps) [2024-11-29T09:44:32.989Z] Copying: 510/1024 [MB] (15 MBps) [2024-11-29T09:44:33.933Z] Copying: 521/1024 [MB] (10 MBps) [2024-11-29T09:44:34.880Z] Copying: 532/1024 [MB] (10 MBps) [2024-11-29T09:44:36.266Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-29T09:44:37.211Z] Copying: 553/1024 [MB] (10 MBps) [2024-11-29T09:44:38.153Z] Copying: 564/1024 [MB] (10 MBps) [2024-11-29T09:44:39.097Z] Copying: 579/1024 [MB] (15 MBps) [2024-11-29T09:44:40.063Z] Copying: 594/1024 [MB] (14 MBps) [2024-11-29T09:44:41.006Z] Copying: 612/1024 [MB] (18 MBps) [2024-11-29T09:44:41.948Z] Copying: 629/1024 [MB] (16 MBps) 
[2024-11-29T09:44:42.915Z] Copying: 645/1024 [MB] (15 MBps) [2024-11-29T09:44:43.860Z] Copying: 658/1024 [MB] (13 MBps) [2024-11-29T09:44:45.249Z] Copying: 671/1024 [MB] (13 MBps) [2024-11-29T09:44:46.192Z] Copying: 687/1024 [MB] (16 MBps) [2024-11-29T09:44:47.138Z] Copying: 703/1024 [MB] (15 MBps) [2024-11-29T09:44:48.081Z] Copying: 714/1024 [MB] (11 MBps) [2024-11-29T09:44:49.026Z] Copying: 725/1024 [MB] (11 MBps) [2024-11-29T09:44:49.971Z] Copying: 737/1024 [MB] (11 MBps) [2024-11-29T09:44:50.914Z] Copying: 749/1024 [MB] (11 MBps) [2024-11-29T09:44:51.859Z] Copying: 759/1024 [MB] (10 MBps) [2024-11-29T09:44:53.243Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-29T09:44:54.199Z] Copying: 781/1024 [MB] (10 MBps) [2024-11-29T09:44:55.145Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-29T09:44:56.086Z] Copying: 803/1024 [MB] (10 MBps) [2024-11-29T09:44:57.029Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-29T09:44:57.969Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-29T09:44:58.915Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-29T09:44:59.859Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-29T09:45:01.245Z] Copying: 856/1024 [MB] (10 MBps) [2024-11-29T09:45:02.188Z] Copying: 868/1024 [MB] (12 MBps) [2024-11-29T09:45:03.132Z] Copying: 879/1024 [MB] (11 MBps) [2024-11-29T09:45:04.077Z] Copying: 890/1024 [MB] (10 MBps) [2024-11-29T09:45:05.022Z] Copying: 921632/1048576 [kB] (10056 kBps) [2024-11-29T09:45:05.990Z] Copying: 914/1024 [MB] (14 MBps) [2024-11-29T09:45:06.934Z] Copying: 924/1024 [MB] (10 MBps) [2024-11-29T09:45:07.878Z] Copying: 936/1024 [MB] (11 MBps) [2024-11-29T09:45:09.264Z] Copying: 949/1024 [MB] (12 MBps) [2024-11-29T09:45:10.202Z] Copying: 960/1024 [MB] (11 MBps) [2024-11-29T09:45:11.200Z] Copying: 973/1024 [MB] (13 MBps) [2024-11-29T09:45:12.141Z] Copying: 987/1024 [MB] (14 MBps) [2024-11-29T09:45:13.082Z] Copying: 1003/1024 [MB] (15 MBps) [2024-11-29T09:45:13.343Z] Copying: 1019/1024 [MB] (15 MBps) [2024-11-29T09:45:13.343Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-29 09:45:13.299193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.299419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:45.617 [2024-11-29 09:45:13.299448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:45.617 [2024-11-29 09:45:13.299458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.299489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:45.617 [2024-11-29 09:45:13.299978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.299998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:45.617 [2024-11-29 09:45:13.300009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:27:45.617 [2024-11-29 09:45:13.300017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.300259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.300276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:45.617 [2024-11-29 09:45:13.300289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:27:45.617 [2024-11-29 09:45:13.300299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.304194] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.304209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:45.617 [2024-11-29 09:45:13.304220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.874 ms 00:27:45.617 [2024-11-29 09:45:13.304229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.311545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.311571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:45.617 [2024-11-29 09:45:13.311582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.298 ms 00:27:45.617 [2024-11-29 09:45:13.311616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.313906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.314020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:45.617 [2024-11-29 09:45:13.314035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:27:45.617 [2024-11-29 09:45:13.314043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.317392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.317515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:45.617 [2024-11-29 09:45:13.317531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.320 ms 00:27:45.617 [2024-11-29 09:45:13.317538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.319710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.319731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:45.617 [2024-11-29 09:45:13.319740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:27:45.617 [2024-11-29 09:45:13.319753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.321738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.321838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:45.617 [2024-11-29 09:45:13.321888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:27:45.617 [2024-11-29 09:45:13.321910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.617 [2024-11-29 09:45:13.323637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.617 [2024-11-29 09:45:13.323728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:45.618 [2024-11-29 09:45:13.323778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.690 ms 00:27:45.618 [2024-11-29 09:45:13.323800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.618 [2024-11-29 09:45:13.325373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.618 [2024-11-29 09:45:13.325469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:45.618 [2024-11-29 09:45:13.325516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:27:45.618 [2024-11-29 09:45:13.325537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
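The ftl_dev_dump_stats output from the first shutdown above reports total writes 155584 against user writes 153600, and total valid LBAs 262656; both figures check out arithmetically against the band dump (a quick sketch with bc, numbers copied from that dump):

  # WAF = total writes / user writes (values from ftl_dev_dump_stats above)
  echo 'scale=4; 155584 / 153600' | bc   # 1.0129, matching the reported WAF
  # total valid LBAs = Band 1 (closed, 261120) + Band 2 (open, 1536)
  echo '261120 + 1536' | bc              # 262656, matching the reported total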
00:27:45.618 [2024-11-29 09:45:13.326881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.618 [2024-11-29 09:45:13.326969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:45.618 [2024-11-29 09:45:13.327014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:27:45.618 [2024-11-29 09:45:13.327035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.618 [2024-11-29 09:45:13.327086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:45.618 [2024-11-29 09:45:13.327116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:45.618 [2024-11-29 09:45:13.327572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:45.618 [2024-11-29 09:45:13.327683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.327740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.327791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.327837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.327869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.327941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.327979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 
wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.328968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.329998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330500] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:45.618 [2024-11-29 09:45:13.330574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330721] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:45.619 [2024-11-29 09:45:13.330797] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:45.619 [2024-11-29 09:45:13.330805] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bdcd7e5-faf1-4d3d-9a3e-4f068cf2336b 00:27:45.619 [2024-11-29 09:45:13.330813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:45.619 [2024-11-29 09:45:13.330821] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:45.619 [2024-11-29 09:45:13.330827] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:45.619 [2024-11-29 09:45:13.330835] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:45.619 [2024-11-29 09:45:13.330843] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:45.619 [2024-11-29 09:45:13.330850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:45.619 [2024-11-29 09:45:13.330863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:45.619 [2024-11-29 09:45:13.330870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:45.619 [2024-11-29 09:45:13.330876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:45.619 [2024-11-29 09:45:13.330884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.619 [2024-11-29 09:45:13.330900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:45.619 [2024-11-29 09:45:13.330910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.798 ms 00:27:45.619 [2024-11-29 09:45:13.330917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.619 [2024-11-29 09:45:13.332296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.619 [2024-11-29 09:45:13.332318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:45.619 [2024-11-29 09:45:13.332327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:27:45.619 [2024-11-29 09:45:13.332340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.619 [2024-11-29 09:45:13.332419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.619 [2024-11-29 09:45:13.332428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:45.619 [2024-11-29 09:45:13.332437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:45.619 [2024-11-29 09:45:13.332444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.619 [2024-11-29 09:45:13.337317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.619 [2024-11-29 09:45:13.337410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:45.619 [2024-11-29 09:45:13.337550] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.619 [2024-11-29 09:45:13.337573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.619 [2024-11-29 09:45:13.337644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.619 [2024-11-29 09:45:13.337665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:45.619 [2024-11-29 09:45:13.337690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.619 [2024-11-29 09:45:13.337709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.619 [2024-11-29 09:45:13.337759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.619 [2024-11-29 09:45:13.337781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:45.619 [2024-11-29 09:45:13.337802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.619 [2024-11-29 09:45:13.337903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.619 [2024-11-29 09:45:13.337933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.619 [2024-11-29 09:45:13.337973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:45.619 [2024-11-29 09:45:13.337995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.619 [2024-11-29 09:45:13.338033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.346811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.346934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:45.879 [2024-11-29 09:45:13.346988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.347009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.353914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.354037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.879 [2024-11-29 09:45:13.354097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.354119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.354175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.354198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.879 [2024-11-29 09:45:13.354217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.354235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.354273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.354399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.879 [2024-11-29 09:45:13.354423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.354441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.354524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.354546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:27:45.879 [2024-11-29 09:45:13.354570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.354618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.354724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.354750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:45.879 [2024-11-29 09:45:13.354769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.354826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.354877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.354898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.879 [2024-11-29 09:45:13.354917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.354966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.355028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.879 [2024-11-29 09:45:13.355056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.879 [2024-11-29 09:45:13.355075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.879 [2024-11-29 09:45:13.355093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.879 [2024-11-29 09:45:13.355291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.077 ms, result 0 00:27:46.140 00:27:46.140 00:27:46.140 09:45:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:48.688 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:48.688 09:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:48.688 09:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:48.688 09:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:48.688 09:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:48.688 09:45:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92710 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92710 ']' 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 92710 00:27:48.688 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92710) - No such process 00:27:48.688 Process with pid 92710 is not found 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 92710 is not found' 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # 
rmmod nbd 00:27:48.688 Remove shared memory files 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:48.688 ************************************ 00:27:48.688 END TEST ftl_dirty_shutdown 00:27:48.688 ************************************ 00:27:48.688 00:27:48.688 real 3m55.136s 00:27:48.688 user 4m10.443s 00:27:48.688 sys 0m23.708s 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.688 09:45:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.688 09:45:16 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:48.688 09:45:16 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.688 09:45:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.688 09:45:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:48.688 ************************************ 00:27:48.688 START TEST ftl_upgrade_shutdown 00:27:48.688 ************************************ 00:27:48.688 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:48.951 * Looking for test storage... 
00:27:48.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:48.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.951 --rc genhtml_branch_coverage=1 00:27:48.951 --rc genhtml_function_coverage=1 00:27:48.951 --rc genhtml_legend=1 00:27:48.951 --rc geninfo_all_blocks=1 00:27:48.951 --rc geninfo_unexecuted_blocks=1 00:27:48.951 00:27:48.951 ' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:48.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.951 --rc genhtml_branch_coverage=1 00:27:48.951 --rc genhtml_function_coverage=1 00:27:48.951 --rc genhtml_legend=1 00:27:48.951 --rc geninfo_all_blocks=1 00:27:48.951 --rc geninfo_unexecuted_blocks=1 00:27:48.951 00:27:48.951 ' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:48.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.951 --rc genhtml_branch_coverage=1 00:27:48.951 --rc genhtml_function_coverage=1 00:27:48.951 --rc genhtml_legend=1 00:27:48.951 --rc geninfo_all_blocks=1 00:27:48.951 --rc geninfo_unexecuted_blocks=1 00:27:48.951 00:27:48.951 ' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:48.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.951 --rc genhtml_branch_coverage=1 00:27:48.951 --rc genhtml_function_coverage=1 00:27:48.951 --rc genhtml_legend=1 00:27:48.951 --rc geninfo_all_blocks=1 00:27:48.951 --rc geninfo_unexecuted_blocks=1 00:27:48.951 00:27:48.951 ' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:48.951 09:45:16 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:48.951 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95268 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95268 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95268 ']' 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.952 09:45:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.952 [2024-11-29 09:45:16.641785] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:27:48.952 [2024-11-29 09:45:16.642101] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95268 ] 00:27:49.213 [2024-11-29 09:45:16.774381] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:49.213 [2024-11-29 09:45:16.799564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.213 [2024-11-29 09:45:16.818608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:49.837 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:50.096 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:50.096 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:50.096 09:45:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:50.356 09:45:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:50.356 09:45:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:50.356 09:45:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:50.356 09:45:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:50.356 09:45:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:50.356 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:50.356 { 00:27:50.356 "name": 
"basen1", 00:27:50.356 "aliases": [ 00:27:50.356 "9ad316e5-af42-4115-9f4d-8dc27e18ea9a" 00:27:50.356 ], 00:27:50.356 "product_name": "NVMe disk", 00:27:50.356 "block_size": 4096, 00:27:50.356 "num_blocks": 1310720, 00:27:50.356 "uuid": "9ad316e5-af42-4115-9f4d-8dc27e18ea9a", 00:27:50.356 "numa_id": -1, 00:27:50.356 "assigned_rate_limits": { 00:27:50.356 "rw_ios_per_sec": 0, 00:27:50.356 "rw_mbytes_per_sec": 0, 00:27:50.356 "r_mbytes_per_sec": 0, 00:27:50.356 "w_mbytes_per_sec": 0 00:27:50.356 }, 00:27:50.356 "claimed": true, 00:27:50.356 "claim_type": "read_many_write_one", 00:27:50.356 "zoned": false, 00:27:50.356 "supported_io_types": { 00:27:50.356 "read": true, 00:27:50.356 "write": true, 00:27:50.356 "unmap": true, 00:27:50.356 "flush": true, 00:27:50.356 "reset": true, 00:27:50.356 "nvme_admin": true, 00:27:50.356 "nvme_io": true, 00:27:50.356 "nvme_io_md": false, 00:27:50.356 "write_zeroes": true, 00:27:50.356 "zcopy": false, 00:27:50.356 "get_zone_info": false, 00:27:50.356 "zone_management": false, 00:27:50.356 "zone_append": false, 00:27:50.356 "compare": true, 00:27:50.356 "compare_and_write": false, 00:27:50.356 "abort": true, 00:27:50.356 "seek_hole": false, 00:27:50.356 "seek_data": false, 00:27:50.356 "copy": true, 00:27:50.356 "nvme_iov_md": false 00:27:50.356 }, 00:27:50.356 "driver_specific": { 00:27:50.356 "nvme": [ 00:27:50.356 { 00:27:50.356 "pci_address": "0000:00:11.0", 00:27:50.356 "trid": { 00:27:50.356 "trtype": "PCIe", 00:27:50.356 "traddr": "0000:00:11.0" 00:27:50.356 }, 00:27:50.356 "ctrlr_data": { 00:27:50.356 "cntlid": 0, 00:27:50.356 "vendor_id": "0x1b36", 00:27:50.356 "model_number": "QEMU NVMe Ctrl", 00:27:50.356 "serial_number": "12341", 00:27:50.356 "firmware_revision": "8.0.0", 00:27:50.356 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:50.356 "oacs": { 00:27:50.356 "security": 0, 00:27:50.356 "format": 1, 00:27:50.356 "firmware": 0, 00:27:50.356 "ns_manage": 1 00:27:50.356 }, 00:27:50.356 "multi_ctrlr": false, 00:27:50.356 "ana_reporting": false 00:27:50.356 }, 00:27:50.356 "vs": { 00:27:50.356 "nvme_version": "1.4" 00:27:50.356 }, 00:27:50.356 "ns_data": { 00:27:50.356 "id": 1, 00:27:50.356 "can_share": false 00:27:50.356 } 00:27:50.356 } 00:27:50.356 ], 00:27:50.356 "mp_policy": "active_passive" 00:27:50.356 } 00:27:50.356 } 00:27:50.356 ]' 00:27:50.356 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:50.356 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:50.356 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=d9196392-dbc6-42b7-8b89-4f1a7d662a6d 00:27:50.618 09:45:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:50.618 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9196392-dbc6-42b7-8b89-4f1a7d662a6d 00:27:50.878 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:51.139 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=dec2f55b-a9b7-4d46-8471-edf94b81977b 00:27:51.139 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u dec2f55b-a9b7-4d46-8471-edf94b81977b 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=705929a3-1669-4601-b0e4-8751ae6fb6f7 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 705929a3-1669-4601-b0e4-8751ae6fb6f7 ]] 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 705929a3-1669-4601-b0e4-8751ae6fb6f7 5120 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=705929a3-1669-4601-b0e4-8751ae6fb6f7 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 705929a3-1669-4601-b0e4-8751ae6fb6f7 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=705929a3-1669-4601-b0e4-8751ae6fb6f7 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:51.400 09:45:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 705929a3-1669-4601-b0e4-8751ae6fb6f7 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:51.661 { 00:27:51.661 "name": "705929a3-1669-4601-b0e4-8751ae6fb6f7", 00:27:51.661 "aliases": [ 00:27:51.661 "lvs/basen1p0" 00:27:51.661 ], 00:27:51.661 "product_name": "Logical Volume", 00:27:51.661 "block_size": 4096, 00:27:51.661 "num_blocks": 5242880, 00:27:51.661 "uuid": "705929a3-1669-4601-b0e4-8751ae6fb6f7", 00:27:51.661 "assigned_rate_limits": { 00:27:51.661 "rw_ios_per_sec": 0, 00:27:51.661 "rw_mbytes_per_sec": 0, 00:27:51.661 "r_mbytes_per_sec": 0, 00:27:51.661 "w_mbytes_per_sec": 0 00:27:51.661 }, 00:27:51.661 "claimed": false, 00:27:51.661 "zoned": false, 00:27:51.661 "supported_io_types": { 00:27:51.661 "read": true, 00:27:51.661 "write": true, 00:27:51.661 "unmap": true, 00:27:51.661 "flush": false, 00:27:51.661 "reset": true, 00:27:51.661 "nvme_admin": false, 00:27:51.661 "nvme_io": false, 00:27:51.661 "nvme_io_md": false, 00:27:51.661 "write_zeroes": true, 00:27:51.661 "zcopy": false, 00:27:51.661 "get_zone_info": false, 00:27:51.661 "zone_management": false, 00:27:51.661 "zone_append": false, 00:27:51.661 "compare": false, 00:27:51.661 "compare_and_write": false, 00:27:51.661 "abort": false, 00:27:51.661 "seek_hole": true, 00:27:51.661 "seek_data": true, 
00:27:51.661 "copy": false, 00:27:51.661 "nvme_iov_md": false 00:27:51.661 }, 00:27:51.661 "driver_specific": { 00:27:51.661 "lvol": { 00:27:51.661 "lvol_store_uuid": "dec2f55b-a9b7-4d46-8471-edf94b81977b", 00:27:51.661 "base_bdev": "basen1", 00:27:51.661 "thin_provision": true, 00:27:51.661 "num_allocated_clusters": 0, 00:27:51.661 "snapshot": false, 00:27:51.661 "clone": false, 00:27:51.661 "esnap_clone": false 00:27:51.661 } 00:27:51.661 } 00:27:51.661 } 00:27:51.661 ]' 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:51.661 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:51.922 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:51.922 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:51.922 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:52.182 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:52.182 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:52.182 09:45:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 705929a3-1669-4601-b0e4-8751ae6fb6f7 -c cachen1p0 --l2p_dram_limit 2 00:27:52.182 [2024-11-29 09:45:19.893920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.182 [2024-11-29 09:45:19.893981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:52.182 [2024-11-29 09:45:19.894000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:52.182 [2024-11-29 09:45:19.894010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.182 [2024-11-29 09:45:19.894086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.182 [2024-11-29 09:45:19.894100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:52.182 [2024-11-29 09:45:19.894114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:52.183 [2024-11-29 09:45:19.894122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.183 [2024-11-29 09:45:19.894148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:52.183 [2024-11-29 09:45:19.894494] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:52.183 [2024-11-29 09:45:19.894517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.183 [2024-11-29 09:45:19.894529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:27:52.183 [2024-11-29 09:45:19.894546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:27:52.183 [2024-11-29 09:45:19.894554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.183 [2024-11-29 09:45:19.894766] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 3b63d20b-201e-4bb7-878a-090426208ff8 00:27:52.183 [2024-11-29 09:45:19.896450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.183 [2024-11-29 09:45:19.896648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:52.183 [2024-11-29 09:45:19.896669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:52.183 [2024-11-29 09:45:19.896679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.183 [2024-11-29 09:45:19.905424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.183 [2024-11-29 09:45:19.905503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:52.183 [2024-11-29 09:45:19.905516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.692 ms 00:27:52.183 [2024-11-29 09:45:19.905534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.183 [2024-11-29 09:45:19.905612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.183 [2024-11-29 09:45:19.905625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:52.183 [2024-11-29 09:45:19.905635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:52.183 [2024-11-29 09:45:19.905645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.183 [2024-11-29 09:45:19.905715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.183 [2024-11-29 09:45:19.905734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:52.183 [2024-11-29 09:45:19.905743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:52.183 [2024-11-29 09:45:19.905754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.183 [2024-11-29 09:45:19.905778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:52.444 [2024-11-29 09:45:19.907958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.444 [2024-11-29 09:45:19.908112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:52.444 [2024-11-29 09:45:19.908133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.183 ms 00:27:52.444 [2024-11-29 09:45:19.908143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.444 [2024-11-29 09:45:19.908183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.444 [2024-11-29 09:45:19.908197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:52.444 [2024-11-29 09:45:19.908214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:52.444 [2024-11-29 09:45:19.908225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.444 [2024-11-29 09:45:19.908249] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:52.444 [2024-11-29 09:45:19.908403] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:52.444 [2024-11-29 
09:45:19.908418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:52.444 [2024-11-29 09:45:19.908430] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:52.444 [2024-11-29 09:45:19.908448] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:52.444 [2024-11-29 09:45:19.908458] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:52.444 [2024-11-29 09:45:19.908471] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:52.444 [2024-11-29 09:45:19.908479] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:52.444 [2024-11-29 09:45:19.908495] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:52.444 [2024-11-29 09:45:19.908502] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:52.444 [2024-11-29 09:45:19.908512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.445 [2024-11-29 09:45:19.908520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:52.445 [2024-11-29 09:45:19.908530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:27:52.445 [2024-11-29 09:45:19.908537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.445 [2024-11-29 09:45:19.908790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.445 [2024-11-29 09:45:19.908836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:52.445 [2024-11-29 09:45:19.908863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.226 ms 00:27:52.445 [2024-11-29 09:45:19.908883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.445 [2024-11-29 09:45:19.909019] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:52.445 [2024-11-29 09:45:19.909159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:52.445 [2024-11-29 09:45:19.909191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:52.445 [2024-11-29 09:45:19.909304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:52.445 [2024-11-29 09:45:19.909326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:52.445 [2024-11-29 09:45:19.909334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:52.445 [2024-11-29 09:45:19.909341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:52.445 [2024-11-29 09:45:19.909357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:52.445 [2024-11-29 09:45:19.909368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:52.445 [2024-11-29 09:45:19.909385] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:52.445 [2024-11-29 09:45:19.909391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:52.445 [2024-11-29 09:45:19.909406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:52.445 [2024-11-29 09:45:19.909415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:52.445 [2024-11-29 09:45:19.909430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:52.445 [2024-11-29 09:45:19.909437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:52.445 [2024-11-29 09:45:19.909464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:52.445 [2024-11-29 09:45:19.909473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:52.445 [2024-11-29 09:45:19.909490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:52.445 [2024-11-29 09:45:19.909497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:52.445 [2024-11-29 09:45:19.909518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:52.445 [2024-11-29 09:45:19.909529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:52.445 [2024-11-29 09:45:19.909546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:52.445 [2024-11-29 09:45:19.909553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:52.445 [2024-11-29 09:45:19.909568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:52.445 [2024-11-29 09:45:19.909619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:52.445 [2024-11-29 09:45:19.909643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:52.445 [2024-11-29 09:45:19.909653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:52.445 [2024-11-29 09:45:19.909672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:52.445 [2024-11-29 09:45:19.909680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909690] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.445 [2024-11-29 09:45:19.909700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:52.445 [2024-11-29 09:45:19.909709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:52.445 [2024-11-29 09:45:19.909716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:52.445 [2024-11-29 09:45:19.909725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:52.445 [2024-11-29 09:45:19.909731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:52.445 [2024-11-29 09:45:19.909740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:52.445 [2024-11-29 09:45:19.909752] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:52.445 [2024-11-29 09:45:19.909767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:52.445 [2024-11-29 09:45:19.909785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:52.445 [2024-11-29 09:45:19.909808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:52.445 [2024-11-29 09:45:19.909820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:52.445 [2024-11-29 09:45:19.909828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:52.445 [2024-11-29 09:45:19.909838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:52.445 [2024-11-29 09:45:19.909896] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:52.445 [2024-11-29 09:45:19.909906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:52.445 [2024-11-29 09:45:19.909925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:52.445 [2024-11-29 09:45:19.909932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:52.445 [2024-11-29 09:45:19.909941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:52.445 [2024-11-29 09:45:19.909950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.445 [2024-11-29 09:45:19.909963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:52.445 [2024-11-29 09:45:19.909972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.014 ms 00:27:52.445 [2024-11-29 09:45:19.909982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.445 [2024-11-29 09:45:19.910056] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:52.445 [2024-11-29 09:45:19.910070] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:56.652 [2024-11-29 09:45:24.182287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.182568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:56.652 [2024-11-29 09:45:24.182669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4272.214 ms 00:27:56.652 [2024-11-29 09:45:24.182708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.196189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.196375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:56.652 [2024-11-29 09:45:24.196444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.355 ms 00:27:56.652 [2024-11-29 09:45:24.196480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.196540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.196565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:56.652 [2024-11-29 09:45:24.196602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:56.652 [2024-11-29 09:45:24.196673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.209288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.209474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:56.652 [2024-11-29 09:45:24.209736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.507 ms 00:27:56.652 [2024-11-29 09:45:24.209766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.209814] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.209839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:56.652 [2024-11-29 09:45:24.209860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:56.652 [2024-11-29 09:45:24.209924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.210482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.210646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:56.652 [2024-11-29 09:45:24.210706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.482 ms 00:27:56.652 [2024-11-29 09:45:24.210739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.210798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.210828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:56.652 [2024-11-29 09:45:24.210848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:56.652 [2024-11-29 09:45:24.210869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.219153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.219301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:56.652 [2024-11-29 09:45:24.219356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.252 ms 00:27:56.652 [2024-11-29 09:45:24.219384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.238637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:56.652 [2024-11-29 09:45:24.240154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.240312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:56.652 [2024-11-29 09:45:24.240382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.680 ms 00:27:56.652 [2024-11-29 09:45:24.240416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.258512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.258655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:56.652 [2024-11-29 09:45:24.258715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.033 ms 00:27:56.652 [2024-11-29 09:45:24.258741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.258834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.258860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:56.652 [2024-11-29 09:45:24.258883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:56.652 [2024-11-29 09:45:24.258901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.262877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.262991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:56.652 [2024-11-29 09:45:24.263042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 3.940 ms 00:27:56.652 [2024-11-29 09:45:24.263065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.267019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.267130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:56.652 [2024-11-29 09:45:24.267147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.858 ms 00:27:56.652 [2024-11-29 09:45:24.267155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.267452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.267462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:56.652 [2024-11-29 09:45:24.267475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:27:56.652 [2024-11-29 09:45:24.267482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.301329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.301368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:56.652 [2024-11-29 09:45:24.301387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.818 ms 00:27:56.652 [2024-11-29 09:45:24.301394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.306279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.306312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:56.652 [2024-11-29 09:45:24.306324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.828 ms 00:27:56.652 [2024-11-29 09:45:24.306331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.310538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.310569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:56.652 [2024-11-29 09:45:24.310581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.168 ms 00:27:56.652 [2024-11-29 09:45:24.310602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.652 [2024-11-29 09:45:24.315329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.652 [2024-11-29 09:45:24.315362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:56.653 [2024-11-29 09:45:24.315375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.690 ms 00:27:56.653 [2024-11-29 09:45:24.315382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.653 [2024-11-29 09:45:24.315472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.653 [2024-11-29 09:45:24.315481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:56.653 [2024-11-29 09:45:24.315492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:56.653 [2024-11-29 09:45:24.315499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.653 [2024-11-29 09:45:24.315570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.653 [2024-11-29 09:45:24.315578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:56.653 [2024-11-29 09:45:24.315608] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:56.653 [2024-11-29 09:45:24.315615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.653 [2024-11-29 09:45:24.316452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4422.157 ms, result 0 00:27:56.653 { 00:27:56.653 "name": "ftl", 00:27:56.653 "uuid": "3b63d20b-201e-4bb7-878a-090426208ff8" 00:27:56.653 } 00:27:56.653 09:45:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:56.914 [2024-11-29 09:45:24.524270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.915 09:45:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:57.176 09:45:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:57.437 [2024-11-29 09:45:24.932721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:57.437 09:45:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:57.437 [2024-11-29 09:45:25.137102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:57.437 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:58.005 Fill FTL, iteration 1 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95391 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@164 -- # export spdk_ini_pid 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95391 /var/tmp/spdk.tgt.sock 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95391 ']' 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:58.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.005 09:45:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:58.005 [2024-11-29 09:45:25.557409] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:27:58.005 [2024-11-29 09:45:25.557682] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95391 ] 00:27:58.005 [2024-11-29 09:45:25.689347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:58.005 [2024-11-29 09:45:25.719886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.263 [2024-11-29 09:45:25.738029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.920 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.920 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:58.920 09:45:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:58.920 ftln1 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95391 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95391 ']' 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95391 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95391 00:27:59.178 killing process with pid 95391 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:59.178 
09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95391' 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95391 00:27:59.178 09:45:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95391 00:27:59.436 09:45:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:59.436 09:45:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:59.694 [2024-11-29 09:45:27.189432] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:27:59.694 [2024-11-29 09:45:27.189553] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95427 ] 00:27:59.694 [2024-11-29 09:45:27.320464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:59.694 [2024-11-29 09:45:27.352314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.694 [2024-11-29 09:45:27.370543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.096  [2024-11-29T09:45:29.763Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-29T09:45:30.706Z] Copying: 352/1024 [MB] (166 MBps) [2024-11-29T09:45:31.651Z] Copying: 530/1024 [MB] (178 MBps) [2024-11-29T09:45:32.593Z] Copying: 694/1024 [MB] (164 MBps) [2024-11-29T09:45:33.159Z] Copying: 886/1024 [MB] (192 MBps) [2024-11-29T09:45:33.417Z] Copying: 1024/1024 [MB] (average 186 MBps) 00:28:05.691 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:05.691 Calculate MD5 checksum, iteration 1 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.691 09:45:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:05.691 [2024-11-29 09:45:33.272296] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
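The two spdk_dd invocations traced above are essentially all the tcp_dd helper in ftl/common.sh does once tcp_initiator_setup has confirmed ini.json exists: run SPDK's dd clone as an NVMe/TCP initiator, with ini.json (the initiator config captured earlier around bdev_nvme_attach_controller) supplying ftln1 as the attached namespace of the exported ftl bdev. A minimal standalone sketch of iteration 1, using this run's paths (DD and CFG are shorthand introduced here, not variables the script defines):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  # fill phase: 1024 x 1 MiB of random data at queue depth 2, starting at block 0
  "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # readback phase: pull the same 1 GiB into a scratch file for checksumming
  "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0

As with plain dd, --seek and --skip are offsets into the output and input in --bs-sized blocks, which is why iteration 2 below advances both to 1024 to cover the second gigabyte.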
00:28:05.691 [2024-11-29 09:45:33.272419] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95491 ] 00:28:05.691 [2024-11-29 09:45:33.404769] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:05.949 [2024-11-29 09:45:33.431622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.949 [2024-11-29 09:45:33.448775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.323  [2024-11-29T09:45:35.308Z] Copying: 700/1024 [MB] (700 MBps) [2024-11-29T09:45:35.308Z] Copying: 1024/1024 [MB] (average 687 MBps) 00:28:07.582 00:28:07.582 09:45:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:07.582 09:45:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:10.127 Fill FTL, iteration 2 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ad2d4ba5282424c7a547918e49d89b89 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:10.127 09:45:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:10.127 [2024-11-29 09:45:37.586927] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:28:10.127 [2024-11-29 09:45:37.587312] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95540 ] 00:28:10.127 [2024-11-29 09:45:37.724477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
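Each pass of the fill/verify loop ends with the same bookkeeping: the scratch file's md5 is stashed into the sums array and the block offsets advance by the 1024-block count, so the iteration starting here writes and verifies the device's second gigabyte. Roughly, in the script's own idiom (the xtrace only shows resulting values such as sums[i]=ad2d4ba5282424c7a547918e49d89b89, seek=1024 and skip=1024; the arithmetic form is a sketch):

  # record the checksum of the gigabyte just read back from ftln1
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
  # step write and read offsets past the region just covered (count=1024 from the setup)
  seek=$((seek + count))
  skip=$((skip + count))
  (( i++ ))

The sums are kept so the same regions can be re-read and compared after the prep-upgrade shutdown and restart; any mismatch would mean FTL lost or corrupted data across the version transition.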
00:28:10.127 [2024-11-29 09:45:37.751264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.127 [2024-11-29 09:45:37.782692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.515  [2024-11-29T09:45:40.186Z] Copying: 174/1024 [MB] (174 MBps) [2024-11-29T09:45:41.129Z] Copying: 358/1024 [MB] (184 MBps) [2024-11-29T09:45:42.071Z] Copying: 551/1024 [MB] (193 MBps) [2024-11-29T09:45:43.030Z] Copying: 733/1024 [MB] (182 MBps) [2024-11-29T09:45:43.612Z] Copying: 916/1024 [MB] (183 MBps) [2024-11-29T09:45:43.871Z] Copying: 1024/1024 [MB] (average 183 MBps) 00:28:16.145 00:28:16.145 Calculate MD5 checksum, iteration 2 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:16.145 09:45:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:16.145 [2024-11-29 09:45:43.859439] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:28:16.145 [2024-11-29 09:45:43.859554] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95604 ] 00:28:16.405 [2024-11-29 09:45:43.987934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
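Once this second checksum lands (bb87a9e0a23c451959e361bd71eddde6, a few lines below), the test reaches the point of the exercise: it flips prep_upgrade_on_shutdown on the ftl bdev, confirms the NV cache actually holds dirty chunks, and then shuts the target down (killprocess 95268 below). Condensed from the rpc.py calls traced below; the jq filter is verbatim, while the zero-count handling is an assumption, since the trace only shows the test itself:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_ftl_set_property -b ftl -p verbose_mode -v true
  "$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # count NV cache chunks still holding data; this run sees 3
  # (two CLOSED chunks at utilization 1.0 plus one partially filled OPEN chunk)
  used=$("$RPC" bdev_ftl_get_properties -b ftl \
         | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  if [[ $used -eq 0 ]]; then
      exit 1   # assumed handling: a clean cache would make the upgrade prep a no-op
  fi

The unusually long Stop core poller step in the shutdown trace further down (duration: 9660.300 ms, against fractions of a millisecond elsewhere) is presumably this preparation running to completion: with prep_upgrade_on_shutdown set, FTL migrates and persists its state on the way down instead of leaving it for dirty recovery.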
00:28:16.405 [2024-11-29 09:45:44.019597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.405 [2024-11-29 09:45:44.045278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.792  [2024-11-29T09:45:46.461Z] Copying: 599/1024 [MB] (599 MBps) [2024-11-29T09:45:47.035Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:28:19.309 00:28:19.309 09:45:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:19.309 09:45:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:21.859 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:21.860 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=bb87a9e0a23c451959e361bd71eddde6 00:28:21.860 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:21.860 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:21.860 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:21.860 [2024-11-29 09:45:49.407766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.860 [2024-11-29 09:45:49.408023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:21.860 [2024-11-29 09:45:49.408049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:21.860 [2024-11-29 09:45:49.408067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.860 [2024-11-29 09:45:49.408108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.860 [2024-11-29 09:45:49.408119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:21.860 [2024-11-29 09:45:49.408129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:21.860 [2024-11-29 09:45:49.408137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.860 [2024-11-29 09:45:49.408159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.860 [2024-11-29 09:45:49.408167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:21.860 [2024-11-29 09:45:49.408181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:21.860 [2024-11-29 09:45:49.408189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.860 [2024-11-29 09:45:49.408266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.489 ms, result 0 00:28:21.860 true 00:28:21.860 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:22.120 { 00:28:22.120 "name": "ftl", 00:28:22.120 "properties": [ 00:28:22.120 { 00:28:22.120 "name": "superblock_version", 00:28:22.120 "value": 5, 00:28:22.120 "read-only": true 00:28:22.120 }, 00:28:22.120 { 00:28:22.120 "name": "base_device", 00:28:22.120 "bands": [ 00:28:22.120 { 00:28:22.120 "id": 0, 00:28:22.120 "state": "FREE", 00:28:22.120 "validity": 0.0 00:28:22.120 }, 00:28:22.120 { 00:28:22.120 "id": 1, 00:28:22.120 "state": "FREE", 00:28:22.120 "validity": 0.0 00:28:22.120 }, 00:28:22.120 { 00:28:22.120 "id": 2, 00:28:22.120 "state": "FREE", 00:28:22.120 "validity": 0.0 00:28:22.120 }, 00:28:22.120 { 00:28:22.120 "id": 3, 00:28:22.120 "state": "FREE", 
00:28:22.120 "validity": 0.0 00:28:22.120 }, 00:28:22.120 { 00:28:22.120 "id": 4, 00:28:22.120 "state": "FREE", 00:28:22.120 "validity": 0.0 00:28:22.120 }, 00:28:22.121 { 00:28:22.121 "id": 5, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 6, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 7, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 8, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 9, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 10, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 11, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 12, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 13, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 14, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 15, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 16, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 17, 00:28:22.121 "state": "FREE", 00:28:22.121 "validity": 0.0 00:28:22.121 } 00:28:22.121 ], 00:28:22.121 "read-only": true 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "name": "cache_device", 00:28:22.121 "type": "bdev", 00:28:22.121 "chunks": [ 00:28:22.121 { 00:28:22.121 "id": 0, 00:28:22.121 "state": "INACTIVE", 00:28:22.121 "utilization": 0.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 1, 00:28:22.121 "state": "CLOSED", 00:28:22.121 "utilization": 1.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 2, 00:28:22.121 "state": "CLOSED", 00:28:22.121 "utilization": 1.0 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 3, 00:28:22.121 "state": "OPEN", 00:28:22.121 "utilization": 0.001953125 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "id": 4, 00:28:22.121 "state": "OPEN", 00:28:22.121 "utilization": 0.0 00:28:22.121 } 00:28:22.121 ], 00:28:22.121 "read-only": true 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "name": "verbose_mode", 00:28:22.121 "value": true, 00:28:22.121 "unit": "", 00:28:22.121 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:22.121 }, 00:28:22.121 { 00:28:22.121 "name": "prep_upgrade_on_shutdown", 00:28:22.121 "value": false, 00:28:22.121 "unit": "", 00:28:22.121 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:22.121 } 00:28:22.121 ] 00:28:22.121 } 00:28:22.121 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:22.121 [2024-11-29 09:45:49.828203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.121 [2024-11-29 09:45:49.828274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:22.121 [2024-11-29 09:45:49.828290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:22.121 [2024-11-29 09:45:49.828300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:22.121 [2024-11-29 09:45:49.828327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.121 [2024-11-29 09:45:49.828337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:22.121 [2024-11-29 09:45:49.828346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:22.121 [2024-11-29 09:45:49.828355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.121 [2024-11-29 09:45:49.828377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.121 [2024-11-29 09:45:49.828387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:22.121 [2024-11-29 09:45:49.828396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:22.121 [2024-11-29 09:45:49.828405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.121 [2024-11-29 09:45:49.828470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.264 ms, result 0 00:28:22.121 true 00:28:22.382 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:22.382 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:22.382 09:45:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:22.643 09:45:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:22.643 09:45:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:22.643 09:45:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:22.643 [2024-11-29 09:45:50.324787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.643 [2024-11-29 09:45:50.325090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:22.643 [2024-11-29 09:45:50.325180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:22.643 [2024-11-29 09:45:50.325209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.643 [2024-11-29 09:45:50.325267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.643 [2024-11-29 09:45:50.325292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:22.643 [2024-11-29 09:45:50.325317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:22.643 [2024-11-29 09:45:50.325339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.644 [2024-11-29 09:45:50.325376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.644 [2024-11-29 09:45:50.325400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:22.644 [2024-11-29 09:45:50.325449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:22.644 [2024-11-29 09:45:50.325519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.644 [2024-11-29 09:45:50.325641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.813 ms, result 0 00:28:22.644 true 00:28:22.644 09:45:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:22.905 { 00:28:22.905 "name": "ftl", 00:28:22.905 "properties": [ 00:28:22.905 { 00:28:22.905 "name": "superblock_version", 00:28:22.905 "value": 5, 00:28:22.905 "read-only": true 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "name": "base_device", 00:28:22.905 "bands": [ 00:28:22.905 { 00:28:22.905 "id": 0, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 1, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 2, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 3, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 4, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 5, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 6, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 7, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 8, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 9, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 10, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 11, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 12, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "id": 13, 00:28:22.905 "state": "FREE", 00:28:22.905 "validity": 0.0 00:28:22.905 }, 00:28:22.906 { 00:28:22.906 "id": 14, 00:28:22.906 "state": "FREE", 00:28:22.906 "validity": 0.0 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "id": 15, 00:28:22.906 "state": "FREE", 00:28:22.906 "validity": 0.0 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "id": 16, 00:28:22.906 "state": "FREE", 00:28:22.906 "validity": 0.0 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "id": 17, 00:28:22.906 "state": "FREE", 00:28:22.906 "validity": 0.0 00:28:22.906 } 00:28:22.906 ], 00:28:22.906 "read-only": true 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "name": "cache_device", 00:28:22.906 "type": "bdev", 00:28:22.906 "chunks": [ 00:28:22.906 { 00:28:22.906 "id": 0, 00:28:22.906 "state": "INACTIVE", 00:28:22.906 "utilization": 0.0 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "id": 1, 00:28:22.906 "state": "CLOSED", 00:28:22.906 "utilization": 1.0 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "id": 2, 00:28:22.906 "state": "CLOSED", 00:28:22.906 "utilization": 1.0 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "id": 3, 00:28:22.906 "state": "OPEN", 00:28:22.906 "utilization": 0.001953125 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "id": 4, 00:28:22.906 "state": "OPEN", 00:28:22.906 "utilization": 0.0 00:28:22.906 } 00:28:22.906 ], 00:28:22.906 "read-only": true 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "name": "verbose_mode", 00:28:22.906 "value": true, 00:28:22.906 "unit": "", 00:28:22.906 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:22.906 }, 00:28:22.906 { 00:28:22.906 "name": "prep_upgrade_on_shutdown", 00:28:22.906 "value": true, 00:28:22.906 "unit": "", 00:28:22.906 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:22.906 } 00:28:22.906 ] 00:28:22.906 } 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95268 ]] 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95268 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95268 ']' 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95268 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95268 00:28:22.906 killing process with pid 95268 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95268' 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95268 00:28:22.906 09:45:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95268 00:28:23.168 [2024-11-29 09:45:50.748383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:23.168 [2024-11-29 09:45:50.755022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.168 [2024-11-29 09:45:50.755080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:23.168 [2024-11-29 09:45:50.755095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:23.168 [2024-11-29 09:45:50.755105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.168 [2024-11-29 09:45:50.755131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:23.168 [2024-11-29 09:45:50.755774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.168 [2024-11-29 09:45:50.755809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:23.168 [2024-11-29 09:45:50.755820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.629 ms 00:28:23.168 [2024-11-29 09:45:50.755827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.416201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.416536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:33.183 [2024-11-29 09:46:00.416566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9660.300 ms 00:28:33.183 [2024-11-29 09:46:00.416576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.418342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.418386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:33.183 [2024-11-29 09:46:00.418399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.721 ms 00:28:33.183 [2024-11-29 09:46:00.418408] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.419600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.419627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:33.183 [2024-11-29 09:46:00.419638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.146 ms 00:28:33.183 [2024-11-29 09:46:00.419648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.422905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.423105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:33.183 [2024-11-29 09:46:00.423126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.199 ms 00:28:33.183 [2024-11-29 09:46:00.423135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.426804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.426861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:33.183 [2024-11-29 09:46:00.426874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.563 ms 00:28:33.183 [2024-11-29 09:46:00.426892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.427004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.427016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:33.183 [2024-11-29 09:46:00.427026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:28:33.183 [2024-11-29 09:46:00.427035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.429865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.430084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:33.183 [2024-11-29 09:46:00.430104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.812 ms 00:28:33.183 [2024-11-29 09:46:00.430111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.433010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.433065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:33.183 [2024-11-29 09:46:00.433076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.798 ms 00:28:33.183 [2024-11-29 09:46:00.433085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.435523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.435577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:33.183 [2024-11-29 09:46:00.435610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.391 ms 00:28:33.183 [2024-11-29 09:46:00.435619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.438122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.438177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:33.183 [2024-11-29 09:46:00.438186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.418 ms 
00:28:33.183 [2024-11-29 09:46:00.438194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.438237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:33.183 [2024-11-29 09:46:00.438252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:33.183 [2024-11-29 09:46:00.438263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:33.183 [2024-11-29 09:46:00.438272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:33.183 [2024-11-29 09:46:00.438280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:33.183 [2024-11-29 09:46:00.438399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:33.183 [2024-11-29 09:46:00.438408] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3b63d20b-201e-4bb7-878a-090426208ff8 00:28:33.183 [2024-11-29 09:46:00.438416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:33.183 [2024-11-29 09:46:00.438433] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:28:33.183 [2024-11-29 09:46:00.438440] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:33.183 [2024-11-29 09:46:00.438450] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:33.183 [2024-11-29 
09:46:00.438457] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:33.183 [2024-11-29 09:46:00.438466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:33.183 [2024-11-29 09:46:00.438474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:33.183 [2024-11-29 09:46:00.438481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:33.183 [2024-11-29 09:46:00.438489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:33.183 [2024-11-29 09:46:00.438499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.438507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:33.183 [2024-11-29 09:46:00.438517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:28:33.183 [2024-11-29 09:46:00.438525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.440993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.441043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:33.183 [2024-11-29 09:46:00.441055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.451 ms 00:28:33.183 [2024-11-29 09:46:00.441064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.183 [2024-11-29 09:46:00.441200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.183 [2024-11-29 09:46:00.441209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:33.183 [2024-11-29 09:46:00.441218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:28:33.183 [2024-11-29 09:46:00.441226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.450215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.450274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:33.184 [2024-11-29 09:46:00.450288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.450296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.450334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.450343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:33.184 [2024-11-29 09:46:00.450353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.450361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.450452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.450463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:33.184 [2024-11-29 09:46:00.450471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.450480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.450498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.450511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:33.184 [2024-11-29 09:46:00.450519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:28:33.184 [2024-11-29 09:46:00.450526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.466388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.466665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:33.184 [2024-11-29 09:46:00.466687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.466697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.477546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.477620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:33.184 [2024-11-29 09:46:00.477633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.477643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.477743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.477754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:33.184 [2024-11-29 09:46:00.477764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.477772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.477816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.477834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:33.184 [2024-11-29 09:46:00.477846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.477854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.477931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.477945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:33.184 [2024-11-29 09:46:00.477954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.477961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.477990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.478000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:33.184 [2024-11-29 09:46:00.478009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.478017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.478056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.478066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:33.184 [2024-11-29 09:46:00.478078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.478086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.478132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:33.184 [2024-11-29 09:46:00.478143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:33.184 [2024-11-29 09:46:00.478151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:28:33.184 [2024-11-29 09:46:00.478159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.184 [2024-11-29 09:46:00.478292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9723.209 ms, result 0 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:35.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95807 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95807 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95807 ']' 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.096 09:46:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:35.096 [2024-11-29 09:46:02.811293] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:28:35.096 [2024-11-29 09:46:02.811750] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95807 ] 00:28:35.408 [2024-11-29 09:46:02.949638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
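The ftl_dev_dump_bands / ftl_dev_dump_stats block above is the clean-shutdown summary: bands 1-2 fully written (261120/261120), band 3 only partially (2048), and a write amplification factor (WAF) of 1.5006, which here is just total writes divided by user writes. A quick cross-check of the reported figure, using awk since bash has no floating point:

    awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'    # prints: WAF: 1.5006

The blocks beyond the 524288 user writes are FTL's own traffic on top of user data, such as the L2P, valid-map, band and P2L metadata persisted during the shutdown steps traced above, plus any relocation writes.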
00:28:35.408 [2024-11-29 09:46:02.980355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.408 [2024-11-29 09:46:03.009902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.669 [2024-11-29 09:46:03.318423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:35.669 [2024-11-29 09:46:03.318727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:35.932 [2024-11-29 09:46:03.471109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.471184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:35.932 [2024-11-29 09:46:03.471203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:35.932 [2024-11-29 09:46:03.471213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.471278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.471291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:35.932 [2024-11-29 09:46:03.471300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:35.932 [2024-11-29 09:46:03.471308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.471331] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:35.932 [2024-11-29 09:46:03.471649] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:35.932 [2024-11-29 09:46:03.471669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.471678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:35.932 [2024-11-29 09:46:03.471698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:28:35.932 [2024-11-29 09:46:03.471710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.473524] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:35.932 [2024-11-29 09:46:03.477539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.477608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:35.932 [2024-11-29 09:46:03.477621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.018 ms 00:28:35.932 [2024-11-29 09:46:03.477639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.477731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.477743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:35.932 [2024-11-29 09:46:03.477753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:35.932 [2024-11-29 09:46:03.477761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.486153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.486199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:35.932 [2024-11-29 09:46:03.486211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.336 ms 00:28:35.932 [2024-11-29 09:46:03.486220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:28:35.932 [2024-11-29 09:46:03.486285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.486294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:35.932 [2024-11-29 09:46:03.486303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:35.932 [2024-11-29 09:46:03.486312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.486385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.486400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:35.932 [2024-11-29 09:46:03.486412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:35.932 [2024-11-29 09:46:03.486419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.486450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:35.932 [2024-11-29 09:46:03.488424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.488636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:35.932 [2024-11-29 09:46:03.488654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.981 ms 00:28:35.932 [2024-11-29 09:46:03.488668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.488702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.488710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:35.932 [2024-11-29 09:46:03.488719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:35.932 [2024-11-29 09:46:03.488728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.488755] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:35.932 [2024-11-29 09:46:03.488781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:35.932 [2024-11-29 09:46:03.488819] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:35.932 [2024-11-29 09:46:03.488843] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:35.932 [2024-11-29 09:46:03.488953] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:35.932 [2024-11-29 09:46:03.488966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:35.932 [2024-11-29 09:46:03.488977] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:35.932 [2024-11-29 09:46:03.488990] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:35.932 [2024-11-29 09:46:03.489000] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:35.932 [2024-11-29 09:46:03.489009] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:35.932 [2024-11-29 09:46:03.489016] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:35.932 [2024-11-29 09:46:03.489024] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:35.932 [2024-11-29 09:46:03.489034] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:35.932 [2024-11-29 09:46:03.489045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.489053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:35.932 [2024-11-29 09:46:03.489062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:28:35.932 [2024-11-29 09:46:03.489069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.489154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.932 [2024-11-29 09:46:03.489164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:35.932 [2024-11-29 09:46:03.489175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:35.932 [2024-11-29 09:46:03.489184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.932 [2024-11-29 09:46:03.489290] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:35.933 [2024-11-29 09:46:03.489308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:35.933 [2024-11-29 09:46:03.489317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:35.933 [2024-11-29 09:46:03.489325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:35.933 [2024-11-29 09:46:03.489343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:35.933 [2024-11-29 09:46:03.489359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:35.933 [2024-11-29 09:46:03.489368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:35.933 [2024-11-29 09:46:03.489376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:35.933 [2024-11-29 09:46:03.489394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:35.933 [2024-11-29 09:46:03.489403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:35.933 [2024-11-29 09:46:03.489459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:35.933 [2024-11-29 09:46:03.489467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:35.933 [2024-11-29 09:46:03.489491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:35.933 [2024-11-29 09:46:03.489499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:35.933 [2024-11-29 09:46:03.489515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:35.933 [2024-11-29 09:46:03.489523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.933 [2024-11-29 
09:46:03.489531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:35.933 [2024-11-29 09:46:03.489538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:35.933 [2024-11-29 09:46:03.489546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.933 [2024-11-29 09:46:03.489555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:35.933 [2024-11-29 09:46:03.489563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:35.933 [2024-11-29 09:46:03.489571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.933 [2024-11-29 09:46:03.489578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:35.933 [2024-11-29 09:46:03.489602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:35.933 [2024-11-29 09:46:03.489611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.933 [2024-11-29 09:46:03.489618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:35.933 [2024-11-29 09:46:03.489625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:35.933 [2024-11-29 09:46:03.489632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:35.933 [2024-11-29 09:46:03.489645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:35.933 [2024-11-29 09:46:03.489652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:35.933 [2024-11-29 09:46:03.489666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:35.933 [2024-11-29 09:46:03.489689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:35.933 [2024-11-29 09:46:03.489703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489712] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:35.933 [2024-11-29 09:46:03.489720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:35.933 [2024-11-29 09:46:03.489728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:35.933 [2024-11-29 09:46:03.489738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.933 [2024-11-29 09:46:03.489746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:35.933 [2024-11-29 09:46:03.489754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:35.933 [2024-11-29 09:46:03.489760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:35.933 [2024-11-29 09:46:03.489767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:35.933 [2024-11-29 09:46:03.489775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:35.933 [2024-11-29 09:46:03.489782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:35.933 [2024-11-29 09:46:03.489790] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:28:35.933 [2024-11-29 09:46:03.489800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:35.933 [2024-11-29 09:46:03.489815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:35.933 [2024-11-29 09:46:03.489837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:35.933 [2024-11-29 09:46:03.489844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:35.933 [2024-11-29 09:46:03.489852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:35.933 [2024-11-29 09:46:03.489862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:35.933 [2024-11-29 09:46:03.489906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:35.934 [2024-11-29 09:46:03.489913] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:35.934 [2024-11-29 09:46:03.489921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.934 [2024-11-29 09:46:03.489931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:35.934 [2024-11-29 09:46:03.489938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:35.934 [2024-11-29 09:46:03.489945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:35.934 [2024-11-29 09:46:03.489956] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:35.934 [2024-11-29 09:46:03.489963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.934 [2024-11-29 09:46:03.489971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:35.934 [2024-11-29 09:46:03.489979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.745 ms 00:28:35.934 [2024-11-29 09:46:03.489989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.934 [2024-11-29 09:46:03.490035] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:35.934 [2024-11-29 09:46:03.490045] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:40.140 [2024-11-29 09:46:07.650576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.140 [2024-11-29 09:46:07.650685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:40.140 [2024-11-29 09:46:07.650713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4160.524 ms 00:28:40.140 [2024-11-29 09:46:07.650723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.140 [2024-11-29 09:46:07.666411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.140 [2024-11-29 09:46:07.666485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:40.140 [2024-11-29 09:46:07.666503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.551 ms 00:28:40.140 [2024-11-29 09:46:07.666513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.140 [2024-11-29 09:46:07.666647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.140 [2024-11-29 09:46:07.666667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:40.140 [2024-11-29 09:46:07.666695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:40.140 [2024-11-29 09:46:07.666706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.140 [2024-11-29 09:46:07.681102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.140 [2024-11-29 09:46:07.681170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:40.140 [2024-11-29 09:46:07.681185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.307 ms 00:28:40.141 [2024-11-29 09:46:07.681195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.681250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.681268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:40.141 [2024-11-29 09:46:07.681279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:40.141 [2024-11-29 09:46:07.681289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.682009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.682036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:40.141 [2024-11-29 09:46:07.682048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.651 ms 00:28:40.141 [2024-11-29 09:46:07.682058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
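Every management step in this trace is a fixed quartet from mngt/ftl_mngt.c: an Action (or Rollback) marker at source line 427, the step name at 428, the duration at 430 and the status at 431. Because a name line is always followed directly by its duration line, a throwaway pipeline (a sketch, assuming the console output is saved as ftl.log) can rank the slowest steps and surface outliers such as the 4160.524 ms 'Scrub NV cache' above:

    grep -E 'trace_step.*(name|duration):' ftl.log \
        | sed -E 's/.*(name|duration): //' \
        | paste - - \
        | sort -t$'\t' -k2 -rn | head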
00:28:40.141 [2024-11-29 09:46:07.682131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.682142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:40.141 [2024-11-29 09:46:07.682156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:40.141 [2024-11-29 09:46:07.682168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.691953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.692197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:40.141 [2024-11-29 09:46:07.692218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.759 ms 00:28:40.141 [2024-11-29 09:46:07.692227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.710856] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:40.141 [2024-11-29 09:46:07.710987] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:40.141 [2024-11-29 09:46:07.711024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.711048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:40.141 [2024-11-29 09:46:07.711075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.638 ms 00:28:40.141 [2024-11-29 09:46:07.711095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.719292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.719356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:40.141 [2024-11-29 09:46:07.719370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.084 ms 00:28:40.141 [2024-11-29 09:46:07.719382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.722583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.722661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:40.141 [2024-11-29 09:46:07.722678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.125 ms 00:28:40.141 [2024-11-29 09:46:07.722693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.725266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.725485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:40.141 [2024-11-29 09:46:07.725507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.496 ms 00:28:40.141 [2024-11-29 09:46:07.725515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.725935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.725953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:40.141 [2024-11-29 09:46:07.725964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:28:40.141 [2024-11-29 09:46:07.725976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.752904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.752990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:40.141 [2024-11-29 09:46:07.753007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.903 ms 00:28:40.141 [2024-11-29 09:46:07.753017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.761784] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:40.141 [2024-11-29 09:46:07.763057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.763113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:40.141 [2024-11-29 09:46:07.763127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.968 ms 00:28:40.141 [2024-11-29 09:46:07.763136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.763250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.763263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:40.141 [2024-11-29 09:46:07.763278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:40.141 [2024-11-29 09:46:07.763287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.763352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.763367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:40.141 [2024-11-29 09:46:07.763377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:40.141 [2024-11-29 09:46:07.763386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.763411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.763420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:40.141 [2024-11-29 09:46:07.763430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:40.141 [2024-11-29 09:46:07.763438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.763478] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:40.141 [2024-11-29 09:46:07.763492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.763508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:40.141 [2024-11-29 09:46:07.763520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:40.141 [2024-11-29 09:46:07.763529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.770332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.770394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:40.141 [2024-11-29 09:46:07.770408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.780 ms 00:28:40.141 [2024-11-29 09:46:07.770418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.770527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.141 [2024-11-29 09:46:07.770539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:40.141 
[2024-11-29 09:46:07.770549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:28:40.141 [2024-11-29 09:46:07.770561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.141 [2024-11-29 09:46:07.772010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4300.353 ms, result 0 00:28:40.141 [2024-11-29 09:46:07.784543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.141 [2024-11-29 09:46:07.800574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:40.141 [2024-11-29 09:46:07.808754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:40.141 09:46:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.141 09:46:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:40.141 09:46:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:40.141 09:46:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:40.141 09:46:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:40.403 [2024-11-29 09:46:08.056886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.403 [2024-11-29 09:46:08.056961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:40.404 [2024-11-29 09:46:08.056978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:40.404 [2024-11-29 09:46:08.056989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.404 [2024-11-29 09:46:08.057017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.404 [2024-11-29 09:46:08.057052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:40.404 [2024-11-29 09:46:08.057061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:40.404 [2024-11-29 09:46:08.057070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.404 [2024-11-29 09:46:08.057091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.404 [2024-11-29 09:46:08.057101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:40.404 [2024-11-29 09:46:08.057110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:40.404 [2024-11-29 09:46:08.057118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.404 [2024-11-29 09:46:08.057186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.297 ms, result 0 00:28:40.404 true 00:28:40.404 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:40.663 { 00:28:40.663 "name": "ftl", 00:28:40.663 "properties": [ 00:28:40.663 { 00:28:40.663 "name": "superblock_version", 00:28:40.663 "value": 5, 00:28:40.663 "read-only": true 00:28:40.663 }, 00:28:40.663 { 00:28:40.663 "name": "base_device", 00:28:40.663 "bands": [ 00:28:40.663 { 00:28:40.663 "id": 0, 00:28:40.663 "state": "CLOSED", 00:28:40.663 "validity": 1.0 00:28:40.663 }, 00:28:40.664 { 00:28:40.664 "id": 1, 00:28:40.664 "state": "CLOSED", 00:28:40.664 "validity": 1.0 
00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 2, 00:28:40.664 "state": "CLOSED", 00:28:40.664 "validity": 0.007843137254901933 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 3, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 4, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 5, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 6, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 7, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 8, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 9, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 10, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 11, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 12, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 13, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 14, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 15, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 16, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 17, 00:28:40.664 "state": "FREE", 00:28:40.664 "validity": 0.0 00:28:40.664 } 00:28:40.664 ], 00:28:40.664 "read-only": true 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "name": "cache_device", 00:28:40.664 "type": "bdev", 00:28:40.664 "chunks": [ 00:28:40.664 { 00:28:40.664 "id": 0, 00:28:40.664 "state": "INACTIVE", 00:28:40.664 "utilization": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 1, 00:28:40.664 "state": "OPEN", 00:28:40.664 "utilization": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 2, 00:28:40.664 "state": "OPEN", 00:28:40.664 "utilization": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 3, 00:28:40.664 "state": "FREE", 00:28:40.664 "utilization": 0.0 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "id": 4, 00:28:40.664 "state": "FREE", 00:28:40.664 "utilization": 0.0 00:28:40.664 } 00:28:40.664 ], 00:28:40.664 "read-only": true 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "name": "verbose_mode", 00:28:40.664 "value": true, 00:28:40.664 "unit": "", 00:28:40.664 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:40.664 }, 00:28:40.664 { 00:28:40.664 "name": "prep_upgrade_on_shutdown", 00:28:40.664 "value": false, 00:28:40.664 "unit": "", 00:28:40.664 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:40.664 } 00:28:40.664 ] 00:28:40.664 } 00:28:40.664 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:40.664 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:40.664 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | 
select(.utilization != 0.0)] | length' 00:28:40.925 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:40.925 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:40.925 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:40.925 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:40.925 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:41.186 Validate MD5 checksum, iteration 1 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:41.186 09:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:41.186 [2024-11-29 09:46:08.836751] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:28:41.186 [2024-11-29 09:46:08.836902] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95886 ] 00:28:41.447 [2024-11-29 09:46:08.971801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
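The two jq probes above are the test's sanity gates before any data moves: they count cache_device chunks with non-zero utilization and bands reported as OPENED in the bdev_ftl_get_properties JSON. Both counts come back 0 here, so the [[ 0 -ne 0 ]] guards fall through and the checksum passes can begin. The used-chunk count can be reproduced by hand against a live target with the same paths as in this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length'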
00:28:41.447 [2024-11-29 09:46:09.002362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.447 [2024-11-29 09:46:09.032336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.831  [2024-11-29T09:46:11.495Z] Copying: 504/1024 [MB] (504 MBps) [2024-11-29T09:46:11.755Z] Copying: 901/1024 [MB] (397 MBps) [2024-11-29T09:46:12.698Z] Copying: 1024/1024 [MB] (average 439 MBps) 00:28:44.972 00:28:44.972 09:46:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:44.972 09:46:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:46.887 Validate MD5 checksum, iteration 2 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ad2d4ba5282424c7a547918e49d89b89 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ad2d4ba5282424c7a547918e49d89b89 != \a\d\2\d\4\b\a\5\2\8\2\4\2\4\c\7\a\5\4\7\9\1\8\e\4\9\d\8\9\b\8\9 ]] 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:46.887 09:46:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:47.150 [2024-11-29 09:46:14.663049] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:28:47.150 [2024-11-29 09:46:14.663170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95943 ] 00:28:47.150 [2024-11-29 09:46:14.795814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
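test_validate_checksum walks the FTL device through the NVMe/TCP initiator in 1 GiB windows: each tcp_dd call hands spdk_dd --bs=1048576 --count=1024 and a growing --skip (0, then 1024), md5sums the copied-out file, and compares the sum against the expected value (the escaped string in the [[ ... != ... ]] trace); any mismatch fails the test. Stripped of the helper plumbing, the loop is roughly this sketch (output path and config path shortened from the ones above):

    for skip in 0 1024; do
        ./build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=test/ftl/config/ini.json \
            --ib=ftln1 --of=/tmp/ftl_window --bs=1048576 --count=1024 --qd=2 --skip=$skip
        md5sum /tmp/ftl_window | cut -f1 '-d '
    done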
00:28:47.150 [2024-11-29 09:46:14.825662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.150 [2024-11-29 09:46:14.845051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.538  [2024-11-29T09:46:17.206Z] Copying: 342/1024 [MB] (342 MBps) [2024-11-29T09:46:18.146Z] Copying: 731/1024 [MB] (389 MBps) [2024-11-29T09:46:22.333Z] Copying: 1024/1024 [MB] (average 387 MBps) 00:28:54.607 00:28:54.607 09:46:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:54.607 09:46:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bb87a9e0a23c451959e361bd71eddde6 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bb87a9e0a23c451959e361bd71eddde6 != \b\b\8\7\a\9\e\0\a\2\3\c\4\5\1\9\5\9\e\3\6\1\b\d\7\1\e\d\d\d\e\6 ]] 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 95807 ]] 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 95807 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96043 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96043 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96043 ']' 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.509 09:46:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:56.509 [2024-11-29 09:46:24.018071] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
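This is the dirty half of the scenario: unlike the graceful 'FTL shutdown' that completed in 9723.209 ms earlier in this log, tcp_target_shutdown_dirty takes the target down with SIGKILL, so FTL never gets to persist a clean state (the device was left explicitly dirty by the 'Set FTL dirty state' step above). From the common.sh@137-139 trace, the helper amounts to roughly:

    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid   # SIGKILL: no clean-shutdown path runs
        unset spdk_tgt_pid
    }

The replacement spdk_tgt (pid 96043, launched from the same tgt.json) that starts below therefore has to bring the device up through the dirty-startup path, which is the shutdown/upgrade recovery behavior this ftl_upgrade_shutdown test is probing.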
00:28:56.509 [2024-11-29 09:46:24.018183] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96043 ] 00:28:56.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 95807 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:56.509 [2024-11-29 09:46:24.150306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:56.509 [2024-11-29 09:46:24.179380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.509 [2024-11-29 09:46:24.197635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.767 [2024-11-29 09:46:24.456012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:56.767 [2024-11-29 09:46:24.456078] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:57.026 [2024-11-29 09:46:24.599030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.599087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:57.026 [2024-11-29 09:46:24.599104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:57.026 [2024-11-29 09:46:24.599112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.599163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.599176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:57.026 [2024-11-29 09:46:24.599184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:57.026 [2024-11-29 09:46:24.599191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.599212] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:57.026 [2024-11-29 09:46:24.599440] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:57.026 [2024-11-29 09:46:24.599454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.599462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:57.026 [2024-11-29 09:46:24.599470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:28:57.026 [2024-11-29 09:46:24.599477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.599750] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:57.026 [2024-11-29 09:46:24.603209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.603370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:57.026 [2024-11-29 09:46:24.603387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.459 ms 00:28:57.026 [2024-11-29 09:46:24.603395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.604265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.604291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:28:57.026 [2024-11-29 09:46:24.604301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:57.026 [2024-11-29 09:46:24.604311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.604580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.604616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:57.026 [2024-11-29 09:46:24.604626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:28:57.026 [2024-11-29 09:46:24.604633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.604666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.604674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:57.026 [2024-11-29 09:46:24.604681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:57.026 [2024-11-29 09:46:24.604688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.604714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.026 [2024-11-29 09:46:24.604725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:57.026 [2024-11-29 09:46:24.604734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:57.026 [2024-11-29 09:46:24.604741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.026 [2024-11-29 09:46:24.604760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:57.027 [2024-11-29 09:46:24.605627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.027 [2024-11-29 09:46:24.605653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:57.027 [2024-11-29 09:46:24.605662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.871 ms 00:28:57.027 [2024-11-29 09:46:24.605669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.027 [2024-11-29 09:46:24.605698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.027 [2024-11-29 09:46:24.605706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:57.027 [2024-11-29 09:46:24.605714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:57.027 [2024-11-29 09:46:24.605721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.027 [2024-11-29 09:46:24.605741] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:57.027 [2024-11-29 09:46:24.605757] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:57.027 [2024-11-29 09:46:24.605790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:57.027 [2024-11-29 09:46:24.605808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:57.027 [2024-11-29 09:46:24.605908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:57.027 [2024-11-29 09:46:24.605923] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:57.027 [2024-11-29 09:46:24.605933] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:57.027 [2024-11-29 09:46:24.605942] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:57.027 [2024-11-29 09:46:24.605951] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:57.027 [2024-11-29 09:46:24.605959] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:57.027 [2024-11-29 09:46:24.605965] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:57.027 [2024-11-29 09:46:24.605972] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:57.027 [2024-11-29 09:46:24.605978] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:57.027 [2024-11-29 09:46:24.605985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.027 [2024-11-29 09:46:24.605994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:57.027 [2024-11-29 09:46:24.606002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.246 ms 00:28:57.027 [2024-11-29 09:46:24.606009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.027 [2024-11-29 09:46:24.606092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.027 [2024-11-29 09:46:24.606102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:57.027 [2024-11-29 09:46:24.606109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:57.027 [2024-11-29 09:46:24.606116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.027 [2024-11-29 09:46:24.606222] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:57.027 [2024-11-29 09:46:24.606231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:57.027 [2024-11-29 09:46:24.606241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:57.027 [2024-11-29 09:46:24.606252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:57.027 [2024-11-29 09:46:24.606266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:57.027 [2024-11-29 09:46:24.606280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:57.027 [2024-11-29 09:46:24.606289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:57.027 [2024-11-29 09:46:24.606296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:57.027 [2024-11-29 09:46:24.606311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:57.027 [2024-11-29 09:46:24.606318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:57.027 [2024-11-29 09:46:24.606342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:57.027 [2024-11-29 09:46:24.606349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 
09:46:24.606357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:57.027 [2024-11-29 09:46:24.606364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:57.027 [2024-11-29 09:46:24.606376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:57.027 [2024-11-29 09:46:24.606391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:57.027 [2024-11-29 09:46:24.606399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:57.027 [2024-11-29 09:46:24.606406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:57.027 [2024-11-29 09:46:24.606414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:57.027 [2024-11-29 09:46:24.606421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:57.027 [2024-11-29 09:46:24.606428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:57.027 [2024-11-29 09:46:24.606436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:57.027 [2024-11-29 09:46:24.606444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:57.027 [2024-11-29 09:46:24.606451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:57.027 [2024-11-29 09:46:24.606461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:57.027 [2024-11-29 09:46:24.606468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:57.027 [2024-11-29 09:46:24.606476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:57.027 [2024-11-29 09:46:24.606483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:57.027 [2024-11-29 09:46:24.606490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:57.027 [2024-11-29 09:46:24.606505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:57.027 [2024-11-29 09:46:24.606512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:57.027 [2024-11-29 09:46:24.606527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:57.027 [2024-11-29 09:46:24.606549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:57.027 [2024-11-29 09:46:24.606556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606564] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:57.027 [2024-11-29 09:46:24.606573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:57.027 [2024-11-29 09:46:24.606583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:57.027 [2024-11-29 09:46:24.606828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:57.027 [2024-11-29 09:46:24.606849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:57.027 [2024-11-29 
09:46:24.606868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:57.027 [2024-11-29 09:46:24.606886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:57.027 [2024-11-29 09:46:24.606948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:57.027 [2024-11-29 09:46:24.606972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:57.027 [2024-11-29 09:46:24.606990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:57.027 [2024-11-29 09:46:24.607010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:57.027 [2024-11-29 09:46:24.607040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:57.027 [2024-11-29 09:46:24.607129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:57.027 [2024-11-29 09:46:24.607242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:57.027 [2024-11-29 09:46:24.607303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:57.027 [2024-11-29 09:46:24.607360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:57.027 [2024-11-29 09:46:24.607392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:57.027 [2024-11-29 09:46:24.607669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:57.027 [2024-11-29 09:46:24.607725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:57.027 [2024-11-29 09:46:24.607756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.028 [2024-11-29 09:46:24.607789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:57.028 [2024-11-29 09:46:24.607817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:57.028 [2024-11-29 09:46:24.607864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:57.028 [2024-11-29 09:46:24.607895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:57.028 [2024-11-29 09:46:24.607924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.607948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:57.028 [2024-11-29 09:46:24.607969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.770 ms 00:28:57.028 [2024-11-29 09:46:24.607988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.614307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.614338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:57.028 [2024-11-29 09:46:24.614348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.263 ms 00:28:57.028 [2024-11-29 09:46:24.614358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.614391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.614399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:57.028 [2024-11-29 09:46:24.614412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:57.028 [2024-11-29 09:46:24.614419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.622476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.622511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:57.028 [2024-11-29 09:46:24.622525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.015 ms 00:28:57.028 [2024-11-29 09:46:24.622532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.622557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.622567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:57.028 [2024-11-29 09:46:24.622578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:57.028 [2024-11-29 09:46:24.622615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.622690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.622706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:57.028 [2024-11-29 09:46:24.622714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:57.028 [2024-11-29 09:46:24.622721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.622760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 
[2024-11-29 09:46:24.622769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:57.028 [2024-11-29 09:46:24.622778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:57.028 [2024-11-29 09:46:24.622785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.627865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.627894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:57.028 [2024-11-29 09:46:24.627902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.063 ms 00:28:57.028 [2024-11-29 09:46:24.627910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.627999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.628012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:57.028 [2024-11-29 09:46:24.628023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:57.028 [2024-11-29 09:46:24.628030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.637512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.637549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:57.028 [2024-11-29 09:46:24.637561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.463 ms 00:28:57.028 [2024-11-29 09:46:24.637569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.638848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.638879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:57.028 [2024-11-29 09:46:24.638889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:28:57.028 [2024-11-29 09:46:24.638896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.653643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.653688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:57.028 [2024-11-29 09:46:24.653704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.713 ms 00:28:57.028 [2024-11-29 09:46:24.653714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.653841] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:57.028 [2024-11-29 09:46:24.653933] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:57.028 [2024-11-29 09:46:24.654012] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:57.028 [2024-11-29 09:46:24.654090] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:57.028 [2024-11-29 09:46:24.654098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.654108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:57.028 [2024-11-29 09:46:24.654116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:28:57.028 [2024-11-29 09:46:24.654123] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.654164] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:57.028 [2024-11-29 09:46:24.654175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.654189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:57.028 [2024-11-29 09:46:24.654197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:57.028 [2024-11-29 09:46:24.654204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.656209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.656242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:57.028 [2024-11-29 09:46:24.656253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.986 ms 00:28:57.028 [2024-11-29 09:46:24.656261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.656828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.656957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:57.028 [2024-11-29 09:46:24.656978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:57.028 [2024-11-29 09:46:24.656986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.028 [2024-11-29 09:46:24.657051] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:57.028 [2024-11-29 09:46:24.657187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.028 [2024-11-29 09:46:24.657199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:57.028 [2024-11-29 09:46:24.657207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.138 ms 00:28:57.028 [2024-11-29 09:46:24.657220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.593 [2024-11-29 09:46:25.080957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.593 [2024-11-29 09:46:25.081018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:57.593 [2024-11-29 09:46:25.081033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 423.455 ms 00:28:57.593 [2024-11-29 09:46:25.081041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.593 [2024-11-29 09:46:25.082245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.593 [2024-11-29 09:46:25.082299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:57.593 [2024-11-29 09:46:25.082316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.777 ms 00:28:57.593 [2024-11-29 09:46:25.082329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.593 [2024-11-29 09:46:25.082663] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:57.593 [2024-11-29 09:46:25.082691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.593 [2024-11-29 09:46:25.082701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:57.593 [2024-11-29 09:46:25.082709] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.323 ms 00:28:57.593 [2024-11-29 09:46:25.082717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.593 [2024-11-29 09:46:25.082749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.593 [2024-11-29 09:46:25.082764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:57.593 [2024-11-29 09:46:25.082772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:57.593 [2024-11-29 09:46:25.082780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.593 [2024-11-29 09:46:25.082811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 425.760 ms, result 0 00:28:57.593 [2024-11-29 09:46:25.082846] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:57.593 [2024-11-29 09:46:25.082926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.593 [2024-11-29 09:46:25.082935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:57.593 [2024-11-29 09:46:25.082947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:28:57.593 [2024-11-29 09:46:25.082956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.850 [2024-11-29 09:46:25.496497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.850 [2024-11-29 09:46:25.496804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:57.850 [2024-11-29 09:46:25.496843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 413.138 ms 00:28:57.850 [2024-11-29 09:46:25.496860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.850 [2024-11-29 09:46:25.498265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.850 [2024-11-29 09:46:25.498312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:57.850 [2024-11-29 09:46:25.498330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:28:57.850 [2024-11-29 09:46:25.498344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.850 [2024-11-29 09:46:25.498737] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:57.850 [2024-11-29 09:46:25.498789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.850 [2024-11-29 09:46:25.498804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:57.850 [2024-11-29 09:46:25.498818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:28:57.850 [2024-11-29 09:46:25.498832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.850 [2024-11-29 09:46:25.498883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.850 [2024-11-29 09:46:25.498899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:57.850 [2024-11-29 09:46:25.498913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:57.850 [2024-11-29 09:46:25.498925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.850 [2024-11-29 09:46:25.498993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 416.123 ms, result 
0 00:28:57.850 [2024-11-29 09:46:25.499060] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:57.850 [2024-11-29 09:46:25.499078] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:57.850 [2024-11-29 09:46:25.499095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.850 [2024-11-29 09:46:25.499110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:57.850 [2024-11-29 09:46:25.499130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 842.060 ms 00:28:57.851 [2024-11-29 09:46:25.499144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.499202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.499218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:57.851 [2024-11-29 09:46:25.499242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:57.851 [2024-11-29 09:46:25.499255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.510011] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:57.851 [2024-11-29 09:46:25.510122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.510134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:57.851 [2024-11-29 09:46:25.510143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.838 ms 00:28:57.851 [2024-11-29 09:46:25.510151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.510856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.510973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:57.851 [2024-11-29 09:46:25.510993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:28:57.851 [2024-11-29 09:46:25.511001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.513254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.513272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:57.851 [2024-11-29 09:46:25.513281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.231 ms 00:28:57.851 [2024-11-29 09:46:25.513290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.513335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.513344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:57.851 [2024-11-29 09:46:25.513352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:57.851 [2024-11-29 09:46:25.513360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.513469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.513480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:57.851 [2024-11-29 09:46:25.513488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:57.851 [2024-11-29 09:46:25.513496] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.513516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.513527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:57.851 [2024-11-29 09:46:25.513538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:57.851 [2024-11-29 09:46:25.513545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.513570] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:57.851 [2024-11-29 09:46:25.513578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.513600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:57.851 [2024-11-29 09:46:25.513613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:57.851 [2024-11-29 09:46:25.513621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.513672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.851 [2024-11-29 09:46:25.513680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:57.851 [2024-11-29 09:46:25.513688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:57.851 [2024-11-29 09:46:25.513695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.851 [2024-11-29 09:46:25.514511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 915.087 ms, result 0 00:28:57.851 [2024-11-29 09:46:25.526947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.851 [2024-11-29 09:46:25.542953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:57.851 [2024-11-29 09:46:25.551052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:58.109 Validate MD5 checksum, iteration 1 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:58.109 09:46:25 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:58.109 09:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:58.109 [2024-11-29 09:46:25.648376] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:28:58.109 [2024-11-29 09:46:25.648490] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96069 ] 00:28:58.109 [2024-11-29 09:46:25.779732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:58.109 [2024-11-29 09:46:25.810214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.109 [2024-11-29 09:46:25.829184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.493  [2024-11-29T09:46:27.786Z] Copying: 687/1024 [MB] (687 MBps) [2024-11-29T09:46:28.358Z] Copying: 1024/1024 [MB] (average 686 MBps) 00:29:00.632 00:29:00.632 09:46:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:00.632 09:46:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:03.176 Validate MD5 checksum, iteration 2 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ad2d4ba5282424c7a547918e49d89b89 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ad2d4ba5282424c7a547918e49d89b89 != \a\d\2\d\4\b\a\5\2\8\2\4\2\4\c\7\a\5\4\7\9\1\8\e\4\9\d\8\9\b\8\9 ]] 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:03.176 09:46:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:03.176 [2024-11-29 
09:46:30.469579] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:29:03.176 [2024-11-29 09:46:30.469701] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96126 ] 00:29:03.176 [2024-11-29 09:46:30.600154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:03.176 [2024-11-29 09:46:30.631504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.176 [2024-11-29 09:46:30.649870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.559  [2024-11-29T09:46:32.545Z] Copying: 696/1024 [MB] (696 MBps) [2024-11-29T09:46:33.116Z] Copying: 1024/1024 [MB] (average 696 MBps) 00:29:05.390 00:29:05.390 09:46:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:05.390 09:46:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bb87a9e0a23c451959e361bd71eddde6 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bb87a9e0a23c451959e361bd71eddde6 != \b\b\8\7\a\9\e\0\a\2\3\c\4\5\1\9\5\9\e\3\6\1\b\d\7\1\e\d\d\d\e\6 ]] 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:07.302 09:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96043 ]] 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96043 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 96043 ']' 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 96043 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96043 00:29:07.563 killing process with pid 96043 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96043' 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 96043 00:29:07.563 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 96043 00:29:07.563 [2024-11-29 09:46:35.285762] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:07.821 [2024-11-29 09:46:35.289004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.289045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:07.821 [2024-11-29 09:46:35.289058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:07.821 [2024-11-29 09:46:35.289066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.289092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:07.821 [2024-11-29 09:46:35.289511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.289525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:07.821 [2024-11-29 09:46:35.289537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:29:07.821 [2024-11-29 09:46:35.289544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.289803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.289819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:07.821 [2024-11-29 09:46:35.289829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.239 ms 00:29:07.821 [2024-11-29 09:46:35.289837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.290994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.291020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:07.821 [2024-11-29 09:46:35.291029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.140 ms 00:29:07.821 [2024-11-29 09:46:35.291037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.292233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.292254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:07.821 [2024-11-29 09:46:35.292263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.165 ms 00:29:07.821 [2024-11-29 09:46:35.292271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.293445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.293490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:07.821 [2024-11-29 09:46:35.293500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:29:07.821 [2024-11-29 09:46:35.293512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.294612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.294752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:29:07.821 [2024-11-29 09:46:35.294767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.069 ms 00:29:07.821 [2024-11-29 09:46:35.294774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.294874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.294884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:07.821 [2024-11-29 09:46:35.294892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:29:07.821 [2024-11-29 09:46:35.294899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.295916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.295934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:07.821 [2024-11-29 09:46:35.295942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.998 ms 00:29:07.821 [2024-11-29 09:46:35.295949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.297040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.297065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:07.821 [2024-11-29 09:46:35.297073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.060 ms 00:29:07.821 [2024-11-29 09:46:35.297080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.298037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.298055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:07.821 [2024-11-29 09:46:35.298063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.927 ms 00:29:07.821 [2024-11-29 09:46:35.298070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.299089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.821 [2024-11-29 09:46:35.299180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:07.821 [2024-11-29 09:46:35.299228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.963 ms 00:29:07.821 [2024-11-29 09:46:35.299249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.821 [2024-11-29 09:46:35.299287] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:07.821 [2024-11-29 09:46:35.299313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:07.821 [2024-11-29 09:46:35.299374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:07.821 [2024-11-29 09:46:35.299404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:07.821 [2024-11-29 09:46:35.299432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299544] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:07.821 [2024-11-29 09:46:35.299928] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:07.821 [2024-11-29 09:46:35.300034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3b63d20b-201e-4bb7-878a-090426208ff8 00:29:07.821 [2024-11-29 09:46:35.300075] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:07.822 [2024-11-29 09:46:35.300093] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:07.822 [2024-11-29 09:46:35.300111] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:07.822 [2024-11-29 09:46:35.300130] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:07.822 [2024-11-29 09:46:35.300148] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:07.822 [2024-11-29 09:46:35.300167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:07.822 [2024-11-29 09:46:35.300185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:07.822 [2024-11-29 09:46:35.300203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:07.822 [2024-11-29 09:46:35.300228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:07.822 [2024-11-29 09:46:35.300246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.822 [2024-11-29 09:46:35.300379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:07.822 [2024-11-29 09:46:35.300399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.960 ms 00:29:07.822 [2024-11-29 09:46:35.300422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.301745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.822 [2024-11-29 09:46:35.302159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:29:07.822 [2024-11-29 09:46:35.302239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.291 ms 00:29:07.822 [2024-11-29 09:46:35.302264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.302365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.822 [2024-11-29 09:46:35.302390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:07.822 [2024-11-29 09:46:35.302460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:29:07.822 [2024-11-29 09:46:35.302483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.307518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.307637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:07.822 [2024-11-29 09:46:35.307651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.307659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.307695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.307703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:07.822 [2024-11-29 09:46:35.307710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.307717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.307795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.307806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:07.822 [2024-11-29 09:46:35.307813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.307820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.307837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.307848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:07.822 [2024-11-29 09:46:35.307861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.307868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.316776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.316811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:07.822 [2024-11-29 09:46:35.316821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.316829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.323479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.323516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:07.822 [2024-11-29 09:46:35.323526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.323533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.323577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.323597] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:07.822 [2024-11-29 09:46:35.323605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.323612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.323664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.323678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:07.822 [2024-11-29 09:46:35.323688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.323695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.323756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.323770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:07.822 [2024-11-29 09:46:35.323778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.323784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.323817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.323826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:07.822 [2024-11-29 09:46:35.323834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.323844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.323880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.323888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:07.822 [2024-11-29 09:46:35.323896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.323903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.323945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:07.822 [2024-11-29 09:46:35.323954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:07.822 [2024-11-29 09:46:35.323964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:07.822 [2024-11-29 09:46:35.323971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.822 [2024-11-29 09:46:35.324086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 35.055 ms, result 0 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:07.822 Remove shared memory files 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid95807 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:07.822 00:29:07.822 real 1m19.107s 00:29:07.822 user 1m47.297s 00:29:07.822 sys 0m20.420s 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.822 09:46:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.822 ************************************ 00:29:07.822 END TEST ftl_upgrade_shutdown 00:29:07.822 ************************************ 00:29:07.822 09:46:35 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:07.822 09:46:35 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:07.822 09:46:35 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:07.822 09:46:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.822 09:46:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:08.080 ************************************ 00:29:08.080 START TEST ftl_restore_fast 00:29:08.080 ************************************ 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:08.080 * Looking for test storage... 00:29:08.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:08.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.080 --rc genhtml_branch_coverage=1 00:29:08.080 --rc genhtml_function_coverage=1 00:29:08.080 --rc genhtml_legend=1 00:29:08.080 --rc geninfo_all_blocks=1 00:29:08.080 --rc geninfo_unexecuted_blocks=1 00:29:08.080 00:29:08.080 ' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:08.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.080 --rc genhtml_branch_coverage=1 00:29:08.080 --rc genhtml_function_coverage=1 00:29:08.080 --rc genhtml_legend=1 00:29:08.080 --rc geninfo_all_blocks=1 00:29:08.080 --rc geninfo_unexecuted_blocks=1 00:29:08.080 00:29:08.080 ' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:08.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.080 --rc genhtml_branch_coverage=1 00:29:08.080 --rc genhtml_function_coverage=1 00:29:08.080 --rc genhtml_legend=1 00:29:08.080 --rc geninfo_all_blocks=1 00:29:08.080 --rc geninfo_unexecuted_blocks=1 00:29:08.080 00:29:08.080 ' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:08.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.080 --rc genhtml_branch_coverage=1 00:29:08.080 --rc genhtml_function_coverage=1 00:29:08.080 --rc genhtml_legend=1 00:29:08.080 --rc geninfo_all_blocks=1 00:29:08.080 --rc geninfo_unexecuted_blocks=1 00:29:08.080 00:29:08.080 ' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
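The xtrace above walks the version gate in scripts/common.sh: `lt 1.15 2` splits each version string on `.`, `-`, and `:`, compares the components numerically, and returns 0 because 1 < 2, which is why the lcov branch/function coverage flags end up enabled. A minimal re-statement of that comparison, assuming plain numeric components (this mirrors the trace, not the full library helper):

```bash
# lt A B -> exit 0 when version A sorts strictly before version B.
lt() {
  local IFS=.-:                     # split on the same separators as the trace
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
  done
  return 1                          # equal versions are not less-than
}

lt 1.15 2 && echo "lcov < 2: enable the branch/function coverage flags"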
00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:08.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
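Condensed, the ftl/common.sh prologue traced above resolves the test and repo roots from the script's own location and derives every tool path and config path from them; a sketch of just that pattern, with the values the trace shows:

```bash
# Resolve directories relative to the running script (ftl/common.sh pattern):
testdir=$(readlink -f "$(dirname "$0")")    # -> /home/vagrant/spdk_repo/spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")     # -> /home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py

# Target/initiator binaries, core masks, and config paths, as exported above:
export spdk_tgt_bin=$rootdir/build/bin/spdk_tgt spdk_tgt_cpumask='[0]'
export spdk_tgt_cnfg=$testdir/config/tgt.json
export spdk_ini_cpumask='[1]' spdk_ini_rpc=/var/tmp/spdk.tgt.sock
export spdk_ini_cnfg=$testdir/config/ini.json
```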
00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.rcuRVRbFQw 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:08.080 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96254 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96254 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 96254 ']' 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.081 09:46:35 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:08.081 [2024-11-29 09:46:35.785934] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:29:08.081 [2024-11-29 09:46:35.786543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96254 ] 00:29:08.338 [2024-11-29 09:46:35.919721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
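The getopts run traced above is what turns `restore.sh -f -c 0000:00:10.0 0000:00:11.0` into this test's configuration: `-f` selects the fast-shutdown variant and `-c` names the NV-cache controller, leaving the base device as the first positional argument. As a sketch (the unused `-u` branch is omitted, and `shift $((OPTIND - 1))` is the generic form of the literal `shift 3` in the trace):

```bash
while getopts ':u:c:f' opt; do
  case $opt in
    f) fast_shutdown=1 ;;        # later appended as --fast-shutdown to bdev_ftl_create
    c) nv_cache=$OPTARG ;;       # NV-cache BDF, here 0000:00:10.0
  esac
done
shift $((OPTIND - 1))            # drops -f -c 0000:00:10.0 (three words)
device=$1                        # base-device BDF, here 0000:00:11.0
timeout=240                      # matches the rpc.py -t 240 used for bdev_ftl_create
```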
00:29:08.338 [2024-11-29 09:46:35.946297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.338 [2024-11-29 09:46:35.962944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.902 09:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.902 09:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:08.902 09:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:08.902 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:08.902 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:08.902 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:08.902 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:08.903 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:09.160 09:46:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:09.418 09:46:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:09.418 { 00:29:09.418 "name": "nvme0n1", 00:29:09.418 "aliases": [ 00:29:09.418 "622e3b41-9836-4942-af81-091df8ce2203" 00:29:09.418 ], 00:29:09.418 "product_name": "NVMe disk", 00:29:09.418 "block_size": 4096, 00:29:09.418 "num_blocks": 1310720, 00:29:09.418 "uuid": "622e3b41-9836-4942-af81-091df8ce2203", 00:29:09.418 "numa_id": -1, 00:29:09.418 "assigned_rate_limits": { 00:29:09.418 "rw_ios_per_sec": 0, 00:29:09.418 "rw_mbytes_per_sec": 0, 00:29:09.418 "r_mbytes_per_sec": 0, 00:29:09.418 "w_mbytes_per_sec": 0 00:29:09.418 }, 00:29:09.418 "claimed": true, 00:29:09.418 "claim_type": "read_many_write_one", 00:29:09.418 "zoned": false, 00:29:09.418 "supported_io_types": { 00:29:09.418 "read": true, 00:29:09.418 "write": true, 00:29:09.418 "unmap": true, 00:29:09.418 "flush": true, 00:29:09.418 "reset": true, 00:29:09.418 "nvme_admin": true, 00:29:09.418 "nvme_io": true, 00:29:09.418 "nvme_io_md": false, 00:29:09.418 "write_zeroes": true, 00:29:09.418 "zcopy": false, 00:29:09.418 "get_zone_info": false, 00:29:09.418 "zone_management": false, 00:29:09.418 "zone_append": false, 00:29:09.418 "compare": true, 00:29:09.418 "compare_and_write": false, 00:29:09.418 "abort": true, 00:29:09.418 "seek_hole": false, 00:29:09.418 "seek_data": false, 00:29:09.418 "copy": true, 00:29:09.418 "nvme_iov_md": false 00:29:09.418 }, 00:29:09.418 "driver_specific": { 00:29:09.418 "nvme": [ 00:29:09.418 { 00:29:09.418 "pci_address": "0000:00:11.0", 00:29:09.418 "trid": { 00:29:09.418 "trtype": "PCIe", 00:29:09.418 "traddr": "0000:00:11.0" 00:29:09.418 }, 00:29:09.418 "ctrlr_data": { 00:29:09.418 "cntlid": 0, 00:29:09.418 
"vendor_id": "0x1b36", 00:29:09.418 "model_number": "QEMU NVMe Ctrl", 00:29:09.418 "serial_number": "12341", 00:29:09.418 "firmware_revision": "8.0.0", 00:29:09.418 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:09.418 "oacs": { 00:29:09.418 "security": 0, 00:29:09.418 "format": 1, 00:29:09.418 "firmware": 0, 00:29:09.418 "ns_manage": 1 00:29:09.418 }, 00:29:09.418 "multi_ctrlr": false, 00:29:09.418 "ana_reporting": false 00:29:09.418 }, 00:29:09.418 "vs": { 00:29:09.418 "nvme_version": "1.4" 00:29:09.418 }, 00:29:09.418 "ns_data": { 00:29:09.418 "id": 1, 00:29:09.418 "can_share": false 00:29:09.418 } 00:29:09.418 } 00:29:09.418 ], 00:29:09.418 "mp_policy": "active_passive" 00:29:09.418 } 00:29:09.418 } 00:29:09.418 ]' 00:29:09.418 09:46:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:09.418 09:46:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:09.418 09:46:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=dec2f55b-a9b7-4d46-8471-edf94b81977b 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:09.676 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dec2f55b-a9b7-4d46-8471-edf94b81977b 00:29:09.935 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:10.193 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=a33f9ce0-c8c5-474d-bcd4-0cfe4b2b0f11 00:29:10.193 09:46:37 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a33f9ce0-c8c5-474d-bcd4-0cfe4b2b0f11 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local 
bdev_name=b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:10.451 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:10.710 { 00:29:10.710 "name": "b940f2c6-f4f0-4047-a772-290a20da5e9c", 00:29:10.710 "aliases": [ 00:29:10.710 "lvs/nvme0n1p0" 00:29:10.710 ], 00:29:10.710 "product_name": "Logical Volume", 00:29:10.710 "block_size": 4096, 00:29:10.710 "num_blocks": 26476544, 00:29:10.710 "uuid": "b940f2c6-f4f0-4047-a772-290a20da5e9c", 00:29:10.710 "assigned_rate_limits": { 00:29:10.710 "rw_ios_per_sec": 0, 00:29:10.710 "rw_mbytes_per_sec": 0, 00:29:10.710 "r_mbytes_per_sec": 0, 00:29:10.710 "w_mbytes_per_sec": 0 00:29:10.710 }, 00:29:10.710 "claimed": false, 00:29:10.710 "zoned": false, 00:29:10.710 "supported_io_types": { 00:29:10.710 "read": true, 00:29:10.710 "write": true, 00:29:10.710 "unmap": true, 00:29:10.710 "flush": false, 00:29:10.710 "reset": true, 00:29:10.710 "nvme_admin": false, 00:29:10.710 "nvme_io": false, 00:29:10.710 "nvme_io_md": false, 00:29:10.710 "write_zeroes": true, 00:29:10.710 "zcopy": false, 00:29:10.710 "get_zone_info": false, 00:29:10.710 "zone_management": false, 00:29:10.710 "zone_append": false, 00:29:10.710 "compare": false, 00:29:10.710 "compare_and_write": false, 00:29:10.710 "abort": false, 00:29:10.710 "seek_hole": true, 00:29:10.710 "seek_data": true, 00:29:10.710 "copy": false, 00:29:10.710 "nvme_iov_md": false 00:29:10.710 }, 00:29:10.710 "driver_specific": { 00:29:10.710 "lvol": { 00:29:10.710 "lvol_store_uuid": "a33f9ce0-c8c5-474d-bcd4-0cfe4b2b0f11", 00:29:10.710 "base_bdev": "nvme0n1", 00:29:10.710 "thin_provision": true, 00:29:10.710 "num_allocated_clusters": 0, 00:29:10.710 "snapshot": false, 00:29:10.710 "clone": false, 00:29:10.710 "esnap_clone": false 00:29:10.710 } 00:29:10.710 } 00:29:10.710 } 00:29:10.710 ]' 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:10.710 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1382 -- # local bdev_name=b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:10.969 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:11.227 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:11.227 { 00:29:11.227 "name": "b940f2c6-f4f0-4047-a772-290a20da5e9c", 00:29:11.227 "aliases": [ 00:29:11.228 "lvs/nvme0n1p0" 00:29:11.228 ], 00:29:11.228 "product_name": "Logical Volume", 00:29:11.228 "block_size": 4096, 00:29:11.228 "num_blocks": 26476544, 00:29:11.228 "uuid": "b940f2c6-f4f0-4047-a772-290a20da5e9c", 00:29:11.228 "assigned_rate_limits": { 00:29:11.228 "rw_ios_per_sec": 0, 00:29:11.228 "rw_mbytes_per_sec": 0, 00:29:11.228 "r_mbytes_per_sec": 0, 00:29:11.228 "w_mbytes_per_sec": 0 00:29:11.228 }, 00:29:11.228 "claimed": false, 00:29:11.228 "zoned": false, 00:29:11.228 "supported_io_types": { 00:29:11.228 "read": true, 00:29:11.228 "write": true, 00:29:11.228 "unmap": true, 00:29:11.228 "flush": false, 00:29:11.228 "reset": true, 00:29:11.228 "nvme_admin": false, 00:29:11.228 "nvme_io": false, 00:29:11.228 "nvme_io_md": false, 00:29:11.228 "write_zeroes": true, 00:29:11.228 "zcopy": false, 00:29:11.228 "get_zone_info": false, 00:29:11.228 "zone_management": false, 00:29:11.228 "zone_append": false, 00:29:11.228 "compare": false, 00:29:11.228 "compare_and_write": false, 00:29:11.228 "abort": false, 00:29:11.228 "seek_hole": true, 00:29:11.228 "seek_data": true, 00:29:11.228 "copy": false, 00:29:11.228 "nvme_iov_md": false 00:29:11.228 }, 00:29:11.228 "driver_specific": { 00:29:11.228 "lvol": { 00:29:11.228 "lvol_store_uuid": "a33f9ce0-c8c5-474d-bcd4-0cfe4b2b0f11", 00:29:11.228 "base_bdev": "nvme0n1", 00:29:11.228 "thin_provision": true, 00:29:11.228 "num_allocated_clusters": 0, 00:29:11.228 "snapshot": false, 00:29:11.228 "clone": false, 00:29:11.228 "esnap_clone": false 00:29:11.228 } 00:29:11.228 } 00:29:11.228 } 00:29:11.228 ]' 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:11.228 09:46:38 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:11.487 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:11.487 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:11.487 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:11.487 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 
-- # local bdev_info 00:29:11.487 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:11.487 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:11.487 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b940f2c6-f4f0-4047-a772-290a20da5e9c 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:11.746 { 00:29:11.746 "name": "b940f2c6-f4f0-4047-a772-290a20da5e9c", 00:29:11.746 "aliases": [ 00:29:11.746 "lvs/nvme0n1p0" 00:29:11.746 ], 00:29:11.746 "product_name": "Logical Volume", 00:29:11.746 "block_size": 4096, 00:29:11.746 "num_blocks": 26476544, 00:29:11.746 "uuid": "b940f2c6-f4f0-4047-a772-290a20da5e9c", 00:29:11.746 "assigned_rate_limits": { 00:29:11.746 "rw_ios_per_sec": 0, 00:29:11.746 "rw_mbytes_per_sec": 0, 00:29:11.746 "r_mbytes_per_sec": 0, 00:29:11.746 "w_mbytes_per_sec": 0 00:29:11.746 }, 00:29:11.746 "claimed": false, 00:29:11.746 "zoned": false, 00:29:11.746 "supported_io_types": { 00:29:11.746 "read": true, 00:29:11.746 "write": true, 00:29:11.746 "unmap": true, 00:29:11.746 "flush": false, 00:29:11.746 "reset": true, 00:29:11.746 "nvme_admin": false, 00:29:11.746 "nvme_io": false, 00:29:11.746 "nvme_io_md": false, 00:29:11.746 "write_zeroes": true, 00:29:11.746 "zcopy": false, 00:29:11.746 "get_zone_info": false, 00:29:11.746 "zone_management": false, 00:29:11.746 "zone_append": false, 00:29:11.746 "compare": false, 00:29:11.746 "compare_and_write": false, 00:29:11.746 "abort": false, 00:29:11.746 "seek_hole": true, 00:29:11.746 "seek_data": true, 00:29:11.746 "copy": false, 00:29:11.746 "nvme_iov_md": false 00:29:11.746 }, 00:29:11.746 "driver_specific": { 00:29:11.746 "lvol": { 00:29:11.746 "lvol_store_uuid": "a33f9ce0-c8c5-474d-bcd4-0cfe4b2b0f11", 00:29:11.746 "base_bdev": "nvme0n1", 00:29:11.746 "thin_provision": true, 00:29:11.746 "num_allocated_clusters": 0, 00:29:11.746 "snapshot": false, 00:29:11.746 "clone": false, 00:29:11.746 "esnap_clone": false 00:29:11.746 } 00:29:11.746 } 00:29:11.746 } 00:29:11.746 ]' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b940f2c6-f4f0-4047-a772-290a20da5e9c --l2p_dram_limit 10' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:11.746 09:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b940f2c6-f4f0-4047-a772-290a20da5e9c --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:12.006 [2024-11-29 09:46:39.502569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.502633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:12.006 [2024-11-29 09:46:39.502649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:12.006 [2024-11-29 09:46:39.502657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.502717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.502729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:12.006 [2024-11-29 09:46:39.502741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:12.006 [2024-11-29 09:46:39.502749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.502769] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:12.006 [2024-11-29 09:46:39.503113] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:12.006 [2024-11-29 09:46:39.503138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.503146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:12.006 [2024-11-29 09:46:39.503158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:29:12.006 [2024-11-29 09:46:39.503169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.503200] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f4181b08-3dfb-4e83-b6f4-aa68fb5795b9 00:29:12.006 [2024-11-29 09:46:39.504278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.504314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:12.006 [2024-11-29 09:46:39.504324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:12.006 [2024-11-29 09:46:39.504333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.509322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.509356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:12.006 [2024-11-29 09:46:39.509365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:29:12.006 [2024-11-29 09:46:39.509379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.509472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.509484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:12.006 [2024-11-29 09:46:39.509492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:12.006 [2024-11-29 09:46:39.509501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.509538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.509550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO 
device 00:29:12.006 [2024-11-29 09:46:39.509558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:12.006 [2024-11-29 09:46:39.509566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.509599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:12.006 [2024-11-29 09:46:39.510995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.511024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:12.006 [2024-11-29 09:46:39.511036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:29:12.006 [2024-11-29 09:46:39.511043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.511075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.511086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:12.006 [2024-11-29 09:46:39.511098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:12.006 [2024-11-29 09:46:39.511105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.511123] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:12.006 [2024-11-29 09:46:39.511260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:12.006 [2024-11-29 09:46:39.511280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:12.006 [2024-11-29 09:46:39.511291] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:12.006 [2024-11-29 09:46:39.511312] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:12.006 [2024-11-29 09:46:39.511321] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:12.006 [2024-11-29 09:46:39.511333] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:12.006 [2024-11-29 09:46:39.511340] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:12.006 [2024-11-29 09:46:39.511348] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:12.006 [2024-11-29 09:46:39.511355] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:12.006 [2024-11-29 09:46:39.511364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.511371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:12.006 [2024-11-29 09:46:39.511380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:29:12.006 [2024-11-29 09:46:39.511387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.006 [2024-11-29 09:46:39.511473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.006 [2024-11-29 09:46:39.511481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:12.006 [2024-11-29 09:46:39.511492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:12.007 [2024-11-29 09:46:39.511499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.007 
[2024-11-29 09:46:39.511611] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:12.007 [2024-11-29 09:46:39.511631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:12.007 [2024-11-29 09:46:39.511645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:12.007 [2024-11-29 09:46:39.511654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:12.007 [2024-11-29 09:46:39.511672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:12.007 [2024-11-29 09:46:39.511690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:12.007 [2024-11-29 09:46:39.511700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:12.007 [2024-11-29 09:46:39.511717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:12.007 [2024-11-29 09:46:39.511724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:12.007 [2024-11-29 09:46:39.511735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:12.007 [2024-11-29 09:46:39.511744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:12.007 [2024-11-29 09:46:39.511753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:12.007 [2024-11-29 09:46:39.511761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:12.007 [2024-11-29 09:46:39.511780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:12.007 [2024-11-29 09:46:39.511789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:12.007 [2024-11-29 09:46:39.511807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.007 [2024-11-29 09:46:39.511823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:12.007 [2024-11-29 09:46:39.511830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.007 [2024-11-29 09:46:39.511848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:12.007 [2024-11-29 09:46:39.511856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.007 [2024-11-29 09:46:39.511874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:12.007 [2024-11-29 09:46:39.511882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.007 [2024-11-29 09:46:39.511898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:29:12.007 [2024-11-29 09:46:39.511908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:12.007 [2024-11-29 09:46:39.511925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:12.007 [2024-11-29 09:46:39.511932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:12.007 [2024-11-29 09:46:39.511941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:12.007 [2024-11-29 09:46:39.511949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:12.007 [2024-11-29 09:46:39.511958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:12.007 [2024-11-29 09:46:39.511965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:12.007 [2024-11-29 09:46:39.511982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:12.007 [2024-11-29 09:46:39.511991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.007 [2024-11-29 09:46:39.511998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:12.007 [2024-11-29 09:46:39.512010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:12.007 [2024-11-29 09:46:39.512018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:12.007 [2024-11-29 09:46:39.512028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.007 [2024-11-29 09:46:39.512039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:12.007 [2024-11-29 09:46:39.512049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:12.007 [2024-11-29 09:46:39.512056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:12.007 [2024-11-29 09:46:39.512066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:12.007 [2024-11-29 09:46:39.512073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:12.007 [2024-11-29 09:46:39.512083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:12.007 [2024-11-29 09:46:39.512093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:12.007 [2024-11-29 09:46:39.512105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.007 [2024-11-29 09:46:39.512114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:12.007 [2024-11-29 09:46:39.512123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:12.007 [2024-11-29 09:46:39.512130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:12.007 [2024-11-29 09:46:39.512139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:12.007 [2024-11-29 09:46:39.512146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:29:12.007 [2024-11-29 09:46:39.512156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:12.007 [2024-11-29 09:46:39.512163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:12.007 [2024-11-29 09:46:39.512171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:12.007 [2024-11-29 09:46:39.512178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:12.007 [2024-11-29 09:46:39.512187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:12.007 [2024-11-29 09:46:39.512194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:12.007 [2024-11-29 09:46:39.512202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:12.007 [2024-11-29 09:46:39.512209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:12.007 [2024-11-29 09:46:39.512218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:12.007 [2024-11-29 09:46:39.512224] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:12.007 [2024-11-29 09:46:39.512234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.007 [2024-11-29 09:46:39.512242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:12.007 [2024-11-29 09:46:39.512250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:12.007 [2024-11-29 09:46:39.512257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:12.007 [2024-11-29 09:46:39.512266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:12.007 [2024-11-29 09:46:39.512273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.007 [2024-11-29 09:46:39.512283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:12.007 [2024-11-29 09:46:39.512291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:29:12.008 [2024-11-29 09:46:39.512299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.008 [2024-11-29 09:46:39.512336] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
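The region sizes in the layout dump above follow directly from the parameters printed alongside it, so they double as a sanity check on the 103424 MiB base device and 5171 MiB NV cache created earlier; a quick arithmetic check using only numbers from the dump:

```bash
echo $(( 20971520 * 4 / 1024 / 1024 ))      # L2P entries x 4 B/entry      -> 80 ("Region l2p: 80.00 MiB")
echo $(( 2048 * 4096 / 1024 / 1024 ))       # P2L ckpt pages x 4 KiB block -> 8  (each p2l region: 8.00 MiB)
echo $(( 26476544 * 4096 / 1024 / 1024 ))   # base blocks x 4 KiB block    -> 103424 (base capacity, MiB)
```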
00:29:12.008 [2024-11-29 09:46:39.512349] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:13.906 [2024-11-29 09:46:41.523969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.524034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:13.906 [2024-11-29 09:46:41.524049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2011.623 ms 00:29:13.906 [2024-11-29 09:46:41.524060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.532248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.532299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:13.906 [2024-11-29 09:46:41.532315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.110 ms 00:29:13.906 [2024-11-29 09:46:41.532330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.532441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.532453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:13.906 [2024-11-29 09:46:41.532462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:13.906 [2024-11-29 09:46:41.532471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.540912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.540960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.906 [2024-11-29 09:46:41.540972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.395 ms 00:29:13.906 [2024-11-29 09:46:41.540987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.541019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.541029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:13.906 [2024-11-29 09:46:41.541037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:13.906 [2024-11-29 09:46:41.541049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.541377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.541425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:13.906 [2024-11-29 09:46:41.541435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:29:13.906 [2024-11-29 09:46:41.541449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.541561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.541572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:13.906 [2024-11-29 09:46:41.541580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:29:13.906 [2024-11-29 09:46:41.541643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.546882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.546924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:13.906 [2024-11-29 
09:46:41.546934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.220 ms 00:29:13.906 [2024-11-29 09:46:41.546943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.568805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:13.906 [2024-11-29 09:46:41.571824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.571861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:13.906 [2024-11-29 09:46:41.571877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.793 ms 00:29:13.906 [2024-11-29 09:46:41.571891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.619225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.619345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:13.906 [2024-11-29 09:46:41.619388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.272 ms 00:29:13.906 [2024-11-29 09:46:41.619411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.619917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.619970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:13.906 [2024-11-29 09:46:41.619999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:29:13.906 [2024-11-29 09:46:41.620019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.906 [2024-11-29 09:46:41.625911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.906 [2024-11-29 09:46:41.625998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:13.906 [2024-11-29 09:46:41.626029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.833 ms 00:29:13.906 [2024-11-29 09:46:41.626050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.630898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.630929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:14.164 [2024-11-29 09:46:41.630938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:29:14.164 [2024-11-29 09:46:41.630944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.631176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.631193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:14.164 [2024-11-29 09:46:41.631202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:29:14.164 [2024-11-29 09:46:41.631208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.658784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.658853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:14.164 [2024-11-29 09:46:41.658868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.548 ms 00:29:14.164 [2024-11-29 09:46:41.658880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.662535] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.662577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:14.164 [2024-11-29 09:46:41.662614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:29:14.164 [2024-11-29 09:46:41.662622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.665669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.665703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:14.164 [2024-11-29 09:46:41.665713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:29:14.164 [2024-11-29 09:46:41.665721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.668724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.668760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:14.164 [2024-11-29 09:46:41.668774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.966 ms 00:29:14.164 [2024-11-29 09:46:41.668783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.668823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.668832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:14.164 [2024-11-29 09:46:41.668843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:14.164 [2024-11-29 09:46:41.668850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.668921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.164 [2024-11-29 09:46:41.668930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:14.164 [2024-11-29 09:46:41.668943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:14.164 [2024-11-29 09:46:41.668950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.164 [2024-11-29 09:46:41.669821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2166.854 ms, result 0 00:29:14.164 { 00:29:14.164 "name": "ftl0", 00:29:14.165 "uuid": "f4181b08-3dfb-4e83-b6f4-aa68fb5795b9" 00:29:14.165 } 00:29:14.165 09:46:41 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:14.165 09:46:41 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:14.424 09:46:41 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:14.424 09:46:41 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:14.424 [2024-11-29 09:46:42.076685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.076746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:14.424 [2024-11-29 09:46:42.076759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:14.424 [2024-11-29 09:46:42.076774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.076798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
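Before tearing the device down, the trace above (restore.sh@61-65) wraps the `save_subsystem_config -n bdev` output in a `{"subsystems": [...]}` envelope so the bdev stack can be replayed later in the restore test, then unloads ftl0, which kicks off the persist sequence that follows. A sketch of that step; the redirect target is a hypothetical stand-in, since the actual destination is not visible in this excerpt:

```bash
{
  echo '{"subsystems": ['
  "$rpc_py" save_subsystem_config -n bdev   # dump only the bdev subsystem config
  echo ']}'
} > "$ftl_json"                             # hypothetical path for the saved config

"$rpc_py" bdev_ftl_unload -b ftl0           # persists L2P/metadata, then detaches ftl0
```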
00:29:14.424 [2024-11-29 09:46:42.077257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.077281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:14.424 [2024-11-29 09:46:42.077292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:29:14.424 [2024-11-29 09:46:42.077300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.077578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.077610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:14.424 [2024-11-29 09:46:42.077623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:29:14.424 [2024-11-29 09:46:42.077636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.080861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.080883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:14.424 [2024-11-29 09:46:42.080894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:29:14.424 [2024-11-29 09:46:42.080902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.087213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.087243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:14.424 [2024-11-29 09:46:42.087257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.290 ms 00:29:14.424 [2024-11-29 09:46:42.087265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.088822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.088856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:14.424 [2024-11-29 09:46:42.088867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:29:14.424 [2024-11-29 09:46:42.088874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.092853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.092895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:14.424 [2024-11-29 09:46:42.092910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.579 ms 00:29:14.424 [2024-11-29 09:46:42.092918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.093040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.093057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:14.424 [2024-11-29 09:46:42.093068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:29:14.424 [2024-11-29 09:46:42.093076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.424 [2024-11-29 09:46:42.094910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.424 [2024-11-29 09:46:42.094942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:14.424 [2024-11-29 09:46:42.094955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:29:14.424 [2024-11-29 09:46:42.094963] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.425 [2024-11-29 09:46:42.096030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.425 [2024-11-29 09:46:42.096063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:14.425 [2024-11-29 09:46:42.096073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:29:14.425 [2024-11-29 09:46:42.096080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.425 [2024-11-29 09:46:42.097012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.425 [2024-11-29 09:46:42.097043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:14.425 [2024-11-29 09:46:42.097053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:29:14.425 [2024-11-29 09:46:42.097060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.425 [2024-11-29 09:46:42.097897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.425 [2024-11-29 09:46:42.097927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:14.425 [2024-11-29 09:46:42.097938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:29:14.425 [2024-11-29 09:46:42.097944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.425 [2024-11-29 09:46:42.097976] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:14.425 [2024-11-29 09:46:42.097988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 
09:46:42.098108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:29:14.425 [2024-11-29 09:46:42.098315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:14.425 [2024-11-29 09:46:42.098641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:14.426 [2024-11-29 09:46:42.098837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:14.426 [2024-11-29 09:46:42.098846] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4181b08-3dfb-4e83-b6f4-aa68fb5795b9 00:29:14.426 [2024-11-29 09:46:42.098854] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:14.426 [2024-11-29 09:46:42.098863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:14.426 [2024-11-29 09:46:42.098869] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:14.426 [2024-11-29 09:46:42.098880] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:14.426 [2024-11-29 09:46:42.098887] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:14.426 [2024-11-29 09:46:42.098896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:14.426 [2024-11-29 09:46:42.098903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:14.426 [2024-11-29 09:46:42.098910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:14.426 [2024-11-29 09:46:42.098916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:14.426 [2024-11-29 09:46:42.098925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.426 [2024-11-29 09:46:42.098932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:14.426 [2024-11-29 09:46:42.098942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:29:14.426 [2024-11-29 09:46:42.098949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.100372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.426 [2024-11-29 09:46:42.100398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
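A note on the statistics dump above: "WAF: inf" is consistent with the usual write-amplification definition (assumed here; the trace itself does not spell it out):

    WAF = media writes / user writes = 960 / 0  ->  inf

No user I/O has reached ftl0 at this point, so all 960 writes are internal metadata traffic, and the ratio has no finite value, which ftl_debug.c prints as inf.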
00:29:14.426 [2024-11-29 09:46:42.100411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:29:14.426 [2024-11-29 09:46:42.100418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.100493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.426 [2024-11-29 09:46:42.100501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:14.426 [2024-11-29 09:46:42.100511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:14.426 [2024-11-29 09:46:42.100519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.105715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.105752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:14.426 [2024-11-29 09:46:42.105764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.105771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.105827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.105835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:14.426 [2024-11-29 09:46:42.105844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.105852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.105905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.105914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:14.426 [2024-11-29 09:46:42.105926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.105933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.105951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.105964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:14.426 [2024-11-29 09:46:42.105973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.105980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.115057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.115105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:14.426 [2024-11-29 09:46:42.115117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.115126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.122483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.122528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:14.426 [2024-11-29 09:46:42.122540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.122548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.122677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.122693] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:14.426 [2024-11-29 09:46:42.122703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.122713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.122748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.122756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:14.426 [2024-11-29 09:46:42.122765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.122773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.122837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.122846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:14.426 [2024-11-29 09:46:42.122855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.122862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.122893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.122902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:14.426 [2024-11-29 09:46:42.122916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.122923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.122960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.122968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:14.426 [2024-11-29 09:46:42.122978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.122985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.123030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.426 [2024-11-29 09:46:42.123040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:14.426 [2024-11-29 09:46:42.123048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.426 [2024-11-29 09:46:42.123056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.426 [2024-11-29 09:46:42.123181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.461 ms, result 0 00:29:14.426 true 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96254 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96254 ']' 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96254 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96254 00:29:14.684 killing process with pid 96254 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96254' 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 96254 00:29:14.684 09:46:42 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 96254 00:29:24.657 09:46:52 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:28.848 262144+0 records in 00:29:28.848 262144+0 records out 00:29:28.848 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.16762 s, 258 MB/s 00:29:28.848 09:46:56 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:30.748 09:46:58 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:30.748 [2024-11-29 09:46:58.059910] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:29:30.748 [2024-11-29 09:46:58.060009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96442 ] 00:29:30.748 [2024-11-29 09:46:58.187515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:30.748 [2024-11-29 09:46:58.216333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.748 [2024-11-29 09:46:58.236060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.748 [2024-11-29 09:46:58.322219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:30.748 [2024-11-29 09:46:58.322288] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:31.008 [2024-11-29 09:46:58.474760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.474823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:31.008 [2024-11-29 09:46:58.474836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:31.008 [2024-11-29 09:46:58.474845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.474902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.474912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:31.008 [2024-11-29 09:46:58.474920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:31.008 [2024-11-29 09:46:58.474930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.474948] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:31.008 [2024-11-29 09:46:58.475273] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:31.008 [2024-11-29 09:46:58.475290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.475300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:31.008 [2024-11-29 
09:46:58.475308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:29:31.008 [2024-11-29 09:46:58.475316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.476398] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:31.008 [2024-11-29 09:46:58.478493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.478535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:31.008 [2024-11-29 09:46:58.478551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:29:31.008 [2024-11-29 09:46:58.478559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.478631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.478644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:31.008 [2024-11-29 09:46:58.478653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:31.008 [2024-11-29 09:46:58.478661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.483352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.483391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:31.008 [2024-11-29 09:46:58.483406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:29:31.008 [2024-11-29 09:46:58.483414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.483503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.483513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:31.008 [2024-11-29 09:46:58.483521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:31.008 [2024-11-29 09:46:58.483531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.483578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.483599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:31.008 [2024-11-29 09:46:58.483614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:31.008 [2024-11-29 09:46:58.483621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.483647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:31.008 [2024-11-29 09:46:58.484949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.484985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:31.008 [2024-11-29 09:46:58.484997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:29:31.008 [2024-11-29 09:46:58.485008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.485038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.008 [2024-11-29 09:46:58.485046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:31.008 [2024-11-29 09:46:58.485058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:31.008 [2024-11-29 
09:46:58.485066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.008 [2024-11-29 09:46:58.485101] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:31.008 [2024-11-29 09:46:58.485121] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:31.008 [2024-11-29 09:46:58.485160] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:31.008 [2024-11-29 09:46:58.485176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:31.008 [2024-11-29 09:46:58.485281] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:31.008 [2024-11-29 09:46:58.485301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:31.009 [2024-11-29 09:46:58.485313] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:31.009 [2024-11-29 09:46:58.485324] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485333] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485340] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:31.009 [2024-11-29 09:46:58.485348] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:31.009 [2024-11-29 09:46:58.485355] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:31.009 [2024-11-29 09:46:58.485362] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:31.009 [2024-11-29 09:46:58.485370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.009 [2024-11-29 09:46:58.485385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:31.009 [2024-11-29 09:46:58.485394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:29:31.009 [2024-11-29 09:46:58.485405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.009 [2024-11-29 09:46:58.485490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.009 [2024-11-29 09:46:58.485535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:31.009 [2024-11-29 09:46:58.485543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:31.009 [2024-11-29 09:46:58.485551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.009 [2024-11-29 09:46:58.485666] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:31.009 [2024-11-29 09:46:58.485683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:31.009 [2024-11-29 09:46:58.485692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:31.009 [2024-11-29 09:46:58.485721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:31.009 [2024-11-29 09:46:58.485746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:31.009 [2024-11-29 09:46:58.485761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:31.009 [2024-11-29 09:46:58.485768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:31.009 [2024-11-29 09:46:58.485778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:31.009 [2024-11-29 09:46:58.485785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:31.009 [2024-11-29 09:46:58.485793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:31.009 [2024-11-29 09:46:58.485800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:31.009 [2024-11-29 09:46:58.485813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:31.009 [2024-11-29 09:46:58.485832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:31.009 [2024-11-29 09:46:58.485851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:31.009 [2024-11-29 09:46:58.485876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:31.009 [2024-11-29 09:46:58.485896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.009 [2024-11-29 09:46:58.485909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:31.009 [2024-11-29 09:46:58.485916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:31.009 [2024-11-29 09:46:58.485928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:31.009 [2024-11-29 09:46:58.485934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:31.009 [2024-11-29 09:46:58.485940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:31.009 [2024-11-29 09:46:58.485947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:31.009 [2024-11-29 09:46:58.485954] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:31.009 [2024-11-29 09:46:58.485961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:31.009 [2024-11-29 09:46:58.485973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:31.009 [2024-11-29 09:46:58.485981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.009 [2024-11-29 09:46:58.485988] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:31.009 [2024-11-29 09:46:58.485996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:31.009 [2024-11-29 09:46:58.486003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:31.009 [2024-11-29 09:46:58.486011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.009 [2024-11-29 09:46:58.486018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:31.009 [2024-11-29 09:46:58.486025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:31.009 [2024-11-29 09:46:58.486032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:31.009 [2024-11-29 09:46:58.486039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:31.009 [2024-11-29 09:46:58.486044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:31.009 [2024-11-29 09:46:58.486051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:31.009 [2024-11-29 09:46:58.486059] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:31.009 [2024-11-29 09:46:58.486067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.009 [2024-11-29 09:46:58.486075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:31.009 [2024-11-29 09:46:58.486082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:31.009 [2024-11-29 09:46:58.486088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:31.009 [2024-11-29 09:46:58.486097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:31.009 [2024-11-29 09:46:58.486105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:31.009 [2024-11-29 09:46:58.486111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:31.009 [2024-11-29 09:46:58.486118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:31.009 [2024-11-29 09:46:58.486125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:31.009 [2024-11-29 09:46:58.486132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:31.009 [2024-11-29 
09:46:58.486139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:31.009 [2024-11-29 09:46:58.486145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:31.010 [2024-11-29 09:46:58.486152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:31.010 [2024-11-29 09:46:58.486159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:31.010 [2024-11-29 09:46:58.486166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:31.010 [2024-11-29 09:46:58.486173] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:31.010 [2024-11-29 09:46:58.486180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.010 [2024-11-29 09:46:58.486188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:31.010 [2024-11-29 09:46:58.486195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:31.010 [2024-11-29 09:46:58.486202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:31.010 [2024-11-29 09:46:58.486210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:31.010 [2024-11-29 09:46:58.486218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.486228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:31.010 [2024-11-29 09:46:58.486235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:29:31.010 [2024-11-29 09:46:58.486246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.494801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.494854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.010 [2024-11-29 09:46:58.494866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.508 ms 00:29:31.010 [2024-11-29 09:46:58.494877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.494980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.494996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:31.010 [2024-11-29 09:46:58.495007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:31.010 [2024-11-29 09:46:58.495015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.514512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.514611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:31.010 [2024-11-29 09:46:58.514631] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 19.425 ms 00:29:31.010 [2024-11-29 09:46:58.514650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.514750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.514766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.010 [2024-11-29 09:46:58.514780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:31.010 [2024-11-29 09:46:58.514800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.515241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.515284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.010 [2024-11-29 09:46:58.515299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:29:31.010 [2024-11-29 09:46:58.515311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.515498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.515524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.010 [2024-11-29 09:46:58.515537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:29:31.010 [2024-11-29 09:46:58.515554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.521807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.521860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:31.010 [2024-11-29 09:46:58.521876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:29:31.010 [2024-11-29 09:46:58.521894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.524501] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:31.010 [2024-11-29 09:46:58.524554] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:31.010 [2024-11-29 09:46:58.524571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.524599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:31.010 [2024-11-29 09:46:58.524612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:29:31.010 [2024-11-29 09:46:58.524623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.539559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.539626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:31.010 [2024-11-29 09:46:58.539640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.865 ms 00:29:31.010 [2024-11-29 09:46:58.539648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.541761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.541795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:31.010 [2024-11-29 09:46:58.541807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:29:31.010 [2024-11-29 
09:46:58.541814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.543037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.543066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:31.010 [2024-11-29 09:46:58.543075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:29:31.010 [2024-11-29 09:46:58.543081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.543442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.543465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:31.010 [2024-11-29 09:46:58.543475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:29:31.010 [2024-11-29 09:46:58.543482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.558488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.558550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:31.010 [2024-11-29 09:46:58.558564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.983 ms 00:29:31.010 [2024-11-29 09:46:58.558573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.566346] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:31.010 [2024-11-29 09:46:58.569218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.569300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:31.010 [2024-11-29 09:46:58.569317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.545 ms 00:29:31.010 [2024-11-29 09:46:58.569331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.569421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.569432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:31.010 [2024-11-29 09:46:58.569441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:31.010 [2024-11-29 09:46:58.569448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.569539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.569550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:31.010 [2024-11-29 09:46:58.569561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:31.010 [2024-11-29 09:46:58.569568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.010 [2024-11-29 09:46:58.569614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.010 [2024-11-29 09:46:58.569624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:31.011 [2024-11-29 09:46:58.569632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:31.011 [2024-11-29 09:46:58.569643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.011 [2024-11-29 09:46:58.569671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:31.011 [2024-11-29 09:46:58.569685] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.011 [2024-11-29 09:46:58.569698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:31.011 [2024-11-29 09:46:58.569705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:31.011 [2024-11-29 09:46:58.569715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.011 [2024-11-29 09:46:58.572952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.011 [2024-11-29 09:46:58.572995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:31.011 [2024-11-29 09:46:58.573007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:29:31.011 [2024-11-29 09:46:58.573016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.011 [2024-11-29 09:46:58.573092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.011 [2024-11-29 09:46:58.573102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:31.011 [2024-11-29 09:46:58.573111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:31.011 [2024-11-29 09:46:58.573125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.011 [2024-11-29 09:46:58.574089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.931 ms, result 0 00:29:31.946  [2024-11-29T09:47:55.489Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-11-29 09:47:55.447034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.763 [2024-11-29 09:47:55.447081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:27.763 [2024-11-29 09:47:55.447098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:27.763 [2024-11-29 09:47:55.447109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.764 [2024-11-29 09:47:55.447130] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:27.764 [2024-11-29 09:47:55.447576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.764 [2024-11-29 09:47:55.447614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:27.764 [2024-11-29 09:47:55.447624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:30:27.764 [2024-11-29 09:47:55.447631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.764 [2024-11-29 09:47:55.449269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.764 [2024-11-29 09:47:55.449301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:27.764 [2024-11-29 09:47:55.449310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:30:27.764 [2024-11-29 09:47:55.449322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.764 [2024-11-29 09:47:55.449366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.764 [2024-11-29 09:47:55.449375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:27.764 [2024-11-29 09:47:55.449383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:27.764 [2024-11-29 09:47:55.449390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.764 [2024-11-29 09:47:55.449433] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.764 [2024-11-29 09:47:55.449441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:27.764 [2024-11-29 09:47:55.449448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:27.764 [2024-11-29 09:47:55.449455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.764 [2024-11-29 09:47:55.449470] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:27.764 [2024-11-29 09:47:55.449481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449654] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 
[2024-11-29 09:47:55.449846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:27.764 [2024-11-29 09:47:55.449872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.449994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:30:27.765 [2024-11-29 09:47:55.450041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:27.765 [2024-11-29 09:47:55.450268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:27.765 [2024-11-29 09:47:55.450275] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4181b08-3dfb-4e83-b6f4-aa68fb5795b9 00:30:27.765 [2024-11-29 09:47:55.450283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:27.765 [2024-11-29 09:47:55.450294] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:27.765 [2024-11-29 09:47:55.450300] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:27.765 [2024-11-29 09:47:55.450308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:27.765 [2024-11-29 09:47:55.450314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:27.765 [2024-11-29 09:47:55.450322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:27.765 [2024-11-29 09:47:55.450329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:27.765 [2024-11-29 09:47:55.450336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:27.765 [2024-11-29 09:47:55.450343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:27.765 [2024-11-29 09:47:55.450350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.765 [2024-11-29 09:47:55.450357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:27.765 [2024-11-29 09:47:55.450366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:30:27.765 [2024-11-29 09:47:55.450373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.765 [2024-11-29 09:47:55.451786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.765 [2024-11-29 09:47:55.451817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:27.765 [2024-11-29 09:47:55.451831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:30:27.765 [2024-11-29 09:47:55.451838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.765 [2024-11-29 09:47:55.451913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.765 [2024-11-29 09:47:55.451925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:27.765 [2024-11-29 09:47:55.451933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:27.765 [2024-11-29 09:47:55.451939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.765 [2024-11-29 09:47:55.456739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.765 [2024-11-29 09:47:55.456767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.765 [2024-11-29 09:47:55.456776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:30:27.766 [2024-11-29 09:47:55.456783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.456835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.456843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.766 [2024-11-29 09:47:55.456851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.456858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.456884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.456893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.766 [2024-11-29 09:47:55.456900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.456907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.456921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.456928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.766 [2024-11-29 09:47:55.456942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.456948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.465642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.465677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.766 [2024-11-29 09:47:55.465694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.465702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.472639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.472687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:27.766 [2024-11-29 09:47:55.472698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.472706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.472751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.472759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:27.766 [2024-11-29 09:47:55.472767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.472775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.472798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.472806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:27.766 [2024-11-29 09:47:55.472813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.472823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.472871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.472880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:27.766 [2024-11-29 
09:47:55.472887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.472894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.472916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.472925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:27.766 [2024-11-29 09:47:55.472932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.472942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.472977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.472986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:27.766 [2024-11-29 09:47:55.472993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.473000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.473039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.766 [2024-11-29 09:47:55.473048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:27.766 [2024-11-29 09:47:55.473056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.766 [2024-11-29 09:47:55.473066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.766 [2024-11-29 09:47:55.473178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 26.120 ms, result 0 00:30:28.337 00:30:28.337 00:30:28.337 09:47:55 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:28.337 [2024-11-29 09:47:55.974964] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:30:28.337 [2024-11-29 09:47:55.975093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97028 ] 00:30:28.597 [2024-11-29 09:47:56.107036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:28.597 [2024-11-29 09:47:56.138086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.597 [2024-11-29 09:47:56.157432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.597 [2024-11-29 09:47:56.246243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.597 [2024-11-29 09:47:56.246314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.859 [2024-11-29 09:47:56.405287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.859 [2024-11-29 09:47:56.405332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:28.859 [2024-11-29 09:47:56.405356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:28.859 [2024-11-29 09:47:56.405368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.859 [2024-11-29 09:47:56.405422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.859 [2024-11-29 09:47:56.405432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.859 [2024-11-29 09:47:56.405440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:28.859 [2024-11-29 09:47:56.405450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.859 [2024-11-29 09:47:56.405469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:28.859 [2024-11-29 09:47:56.405718] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:28.859 [2024-11-29 09:47:56.405734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.859 [2024-11-29 09:47:56.405744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.859 [2024-11-29 09:47:56.405752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:30:28.859 [2024-11-29 09:47:56.405759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.859 [2024-11-29 09:47:56.405995] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:28.859 [2024-11-29 09:47:56.406017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.859 [2024-11-29 09:47:56.406024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:28.859 [2024-11-29 09:47:56.406036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:28.859 [2024-11-29 09:47:56.406045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.859 [2024-11-29 09:47:56.406090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.859 [2024-11-29 09:47:56.406102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:28.860 [2024-11-29 09:47:56.406113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:28.860 [2024-11-29 09:47:56.406120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.406351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.860 [2024-11-29 09:47:56.406361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.860 [2024-11-29 09:47:56.406369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:30:28.860 [2024-11-29 09:47:56.406379] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.406485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.860 [2024-11-29 09:47:56.406495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.860 [2024-11-29 09:47:56.406503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:28.860 [2024-11-29 09:47:56.406509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.406533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.860 [2024-11-29 09:47:56.406541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:28.860 [2024-11-29 09:47:56.406548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:28.860 [2024-11-29 09:47:56.406555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.406574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:28.860 [2024-11-29 09:47:56.407994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.860 [2024-11-29 09:47:56.408020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.860 [2024-11-29 09:47:56.408030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:30:28.860 [2024-11-29 09:47:56.408038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.408070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.860 [2024-11-29 09:47:56.408079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:28.860 [2024-11-29 09:47:56.408088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:28.860 [2024-11-29 09:47:56.408096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.408116] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:28.860 [2024-11-29 09:47:56.408134] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:28.860 [2024-11-29 09:47:56.408170] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:28.860 [2024-11-29 09:47:56.408190] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:28.860 [2024-11-29 09:47:56.408295] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:28.860 [2024-11-29 09:47:56.408310] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:28.860 [2024-11-29 09:47:56.408321] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:28.860 [2024-11-29 09:47:56.408337] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408346] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408355] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:28.860 [2024-11-29 09:47:56.408363] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:30:28.860 [2024-11-29 09:47:56.408371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:28.860 [2024-11-29 09:47:56.408379] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:28.860 [2024-11-29 09:47:56.408387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.860 [2024-11-29 09:47:56.408398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:28.860 [2024-11-29 09:47:56.408407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:30:28.860 [2024-11-29 09:47:56.408414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.408497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.860 [2024-11-29 09:47:56.408507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:28.860 [2024-11-29 09:47:56.408516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:28.860 [2024-11-29 09:47:56.408523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.860 [2024-11-29 09:47:56.408641] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:28.860 [2024-11-29 09:47:56.408655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:28.860 [2024-11-29 09:47:56.408664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:28.860 [2024-11-29 09:47:56.408700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:28.860 [2024-11-29 09:47:56.408728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.860 [2024-11-29 09:47:56.408743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:28.860 [2024-11-29 09:47:56.408750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:28.860 [2024-11-29 09:47:56.408759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.860 [2024-11-29 09:47:56.408768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:28.860 [2024-11-29 09:47:56.408775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:28.860 [2024-11-29 09:47:56.408783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:28.860 [2024-11-29 09:47:56.408800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:28.860 [2024-11-29 09:47:56.408830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408837] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:28.860 [2024-11-29 09:47:56.408856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:28.860 [2024-11-29 09:47:56.408881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:28.860 [2024-11-29 09:47:56.408904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.860 [2024-11-29 09:47:56.408918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:28.860 [2024-11-29 09:47:56.408925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.860 [2024-11-29 09:47:56.408946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:28.860 [2024-11-29 09:47:56.408953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:28.860 [2024-11-29 09:47:56.408960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.860 [2024-11-29 09:47:56.408967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:28.860 [2024-11-29 09:47:56.408975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:28.860 [2024-11-29 09:47:56.408982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.860 [2024-11-29 09:47:56.408989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:28.860 [2024-11-29 09:47:56.408996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:28.860 [2024-11-29 09:47:56.409003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.860 [2024-11-29 09:47:56.409010] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:28.860 [2024-11-29 09:47:56.409019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:28.860 [2024-11-29 09:47:56.409029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.860 [2024-11-29 09:47:56.409037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.860 [2024-11-29 09:47:56.409049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:28.860 [2024-11-29 09:47:56.409056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:28.860 [2024-11-29 09:47:56.409065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:28.860 [2024-11-29 09:47:56.409073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:28.860 [2024-11-29 09:47:56.409080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:28.860 [2024-11-29 09:47:56.409088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:30:28.860 [2024-11-29 09:47:56.409097] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:28.860 [2024-11-29 09:47:56.409106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.860 [2024-11-29 09:47:56.409115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:28.860 [2024-11-29 09:47:56.409123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:28.860 [2024-11-29 09:47:56.409131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:28.860 [2024-11-29 09:47:56.409139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:28.861 [2024-11-29 09:47:56.409147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:28.861 [2024-11-29 09:47:56.409155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:28.861 [2024-11-29 09:47:56.409162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:28.861 [2024-11-29 09:47:56.409170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:28.861 [2024-11-29 09:47:56.409178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:28.861 [2024-11-29 09:47:56.409187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:28.861 [2024-11-29 09:47:56.409196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:28.861 [2024-11-29 09:47:56.409204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:28.861 [2024-11-29 09:47:56.409212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:28.861 [2024-11-29 09:47:56.409220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:28.861 [2024-11-29 09:47:56.409227] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:28.861 [2024-11-29 09:47:56.409236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.861 [2024-11-29 09:47:56.409245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.861 [2024-11-29 09:47:56.409253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:28.861 [2024-11-29 09:47:56.409261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:28.861 [2024-11-29 09:47:56.409268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:28.861 [2024-11-29 09:47:56.409277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.409285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:28.861 [2024-11-29 09:47:56.409293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:30:28.861 [2024-11-29 09:47:56.409305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.415389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.415413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.861 [2024-11-29 09:47:56.415425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.004 ms 00:30:28.861 [2024-11-29 09:47:56.415436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.415512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.415527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:28.861 [2024-11-29 09:47:56.415541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:28.861 [2024-11-29 09:47:56.415551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.434314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.434371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:28.861 [2024-11-29 09:47:56.434389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.708 ms 00:30:28.861 [2024-11-29 09:47:56.434401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.434464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.434479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:28.861 [2024-11-29 09:47:56.434491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:28.861 [2024-11-29 09:47:56.434508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.434661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.434686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:28.861 [2024-11-29 09:47:56.434701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:30:28.861 [2024-11-29 09:47:56.434713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.434892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.434920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:28.861 [2024-11-29 09:47:56.434935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:30:28.861 [2024-11-29 09:47:56.434947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.441164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 
09:47:56.441210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:28.861 [2024-11-29 09:47:56.441224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:30:28.861 [2024-11-29 09:47:56.441236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.441397] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:28.861 [2024-11-29 09:47:56.441423] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:28.861 [2024-11-29 09:47:56.441438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.441453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:28.861 [2024-11-29 09:47:56.441478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:28.861 [2024-11-29 09:47:56.441491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.455501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.455527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:28.861 [2024-11-29 09:47:56.455541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.981 ms 00:30:28.861 [2024-11-29 09:47:56.455548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.455664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.455676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:28.861 [2024-11-29 09:47:56.455688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:30:28.861 [2024-11-29 09:47:56.455695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.455737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.455746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:28.861 [2024-11-29 09:47:56.455758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:28.861 [2024-11-29 09:47:56.455765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.456058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.456067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:28.861 [2024-11-29 09:47:56.456076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:30:28.861 [2024-11-29 09:47:56.456084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.456098] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:28.861 [2024-11-29 09:47:56.456108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.456118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:28.861 [2024-11-29 09:47:56.456125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:28.861 [2024-11-29 09:47:56.456133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.464041] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:28.861 [2024-11-29 09:47:56.464167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.464176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:28.861 [2024-11-29 09:47:56.464185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.018 ms 00:30:28.861 [2024-11-29 09:47:56.464195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.466620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.466641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:28.861 [2024-11-29 09:47:56.466650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:30:28.861 [2024-11-29 09:47:56.466659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.466726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.466735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:28.861 [2024-11-29 09:47:56.466743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:28.861 [2024-11-29 09:47:56.466752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.466789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.466798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:28.861 [2024-11-29 09:47:56.466806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:28.861 [2024-11-29 09:47:56.466812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.466840] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:28.861 [2024-11-29 09:47:56.466849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.466856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:28.861 [2024-11-29 09:47:56.466863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:28.861 [2024-11-29 09:47:56.466870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.471136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.471173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:28.861 [2024-11-29 09:47:56.471182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.245 ms 00:30:28.861 [2024-11-29 09:47:56.471189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.471256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.861 [2024-11-29 09:47:56.471265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:28.861 [2024-11-29 09:47:56.471273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:28.861 [2024-11-29 09:47:56.471283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.861 [2024-11-29 09:47:56.472120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 66.468 ms, result 0 
00:30:30.249  [2024-11-29T09:49:32.490Z] Copying: 1024/1024 [MB] (average 10 MBps)[2024-11-29 09:49:32.291526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.764 [2024-11-29 09:49:32.291602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.764 [2024-11-29 09:49:32.291616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:04.764 [2024-11-29 09:49:32.291625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.764 [2024-11-29 09:49:32.291651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.764 [2024-11-29 09:49:32.292107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.764 [2024-11-29 09:49:32.292138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.764 [2024-11-29 09:49:32.292148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:32:04.764 [2024-11-29 09:49:32.292156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.764 [2024-11-29 09:49:32.292380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.764 [2024-11-29 
09:49:32.292390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:04.764 [2024-11-29 09:49:32.292400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:32:04.764 [2024-11-29 09:49:32.292408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.764 [2024-11-29 09:49:32.292440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.764 [2024-11-29 09:49:32.292450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:04.764 [2024-11-29 09:49:32.292459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.764 [2024-11-29 09:49:32.292467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.764 [2024-11-29 09:49:32.292519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.764 [2024-11-29 09:49:32.292529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:04.764 [2024-11-29 09:49:32.292537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:04.764 [2024-11-29 09:49:32.292546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.765 [2024-11-29 09:49:32.292560] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:04.765 [2024-11-29 09:49:32.292579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292770] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 
09:49:32.292971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.292994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:32:04.765 [2024-11-29 09:49:32.293166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:04.765 [2024-11-29 09:49:32.293506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:04.766 [2024-11-29 09:49:32.293694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:04.766 [2024-11-29 09:49:32.293701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4181b08-3dfb-4e83-b6f4-aa68fb5795b9 00:32:04.766 [2024-11-29 09:49:32.293709] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:04.766 [2024-11-29 09:49:32.293723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:04.766 [2024-11-29 09:49:32.293730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:04.766 [2024-11-29 09:49:32.293741] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:04.766 [2024-11-29 09:49:32.293749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:04.766 [2024-11-29 09:49:32.293756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:04.766 [2024-11-29 09:49:32.293764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:04.766 [2024-11-29 09:49:32.293772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:04.766 [2024-11-29 09:49:32.293778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:04.766 [2024-11-29 09:49:32.293786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.766 [2024-11-29 09:49:32.293794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:04.766 [2024-11-29 09:49:32.293804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:32:04.766 [2024-11-29 09:49:32.293811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.295354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.766 [2024-11-29 09:49:32.295383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:04.766 [2024-11-29 09:49:32.295393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:32:04.766 [2024-11-29 09:49:32.295401] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.295477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.766 [2024-11-29 09:49:32.295492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:04.766 [2024-11-29 09:49:32.295501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:04.766 [2024-11-29 09:49:32.295508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.300789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.300819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.766 [2024-11-29 09:49:32.300828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.300839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.300892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.300906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.766 [2024-11-29 09:49:32.300914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.300921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.300949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.300958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.766 [2024-11-29 09:49:32.300966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.300973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.300988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.300995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.766 [2024-11-29 09:49:32.301005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.301012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.310499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.310539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.766 [2024-11-29 09:49:32.310549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.310556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.318175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.766 [2024-11-29 09:49:32.318192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.318200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.318261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.766 [2024-11-29 09:49:32.318269] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.318277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.318308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.766 [2024-11-29 09:49:32.318316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.318325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.318379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.766 [2024-11-29 09:49:32.318393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.318400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.318430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.766 [2024-11-29 09:49:32.318440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.318447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.318490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.766 [2024-11-29 09:49:32.318498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.318504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.766 [2024-11-29 09:49:32.318550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.766 [2024-11-29 09:49:32.318558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.766 [2024-11-29 09:49:32.318568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.766 [2024-11-29 09:49:32.318699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.142 ms, result 0 00:32:04.766 00:32:04.766 00:32:05.025 09:49:32 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:06.934 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:06.934 09:49:34 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:07.194 [2024-11-29 09:49:34.706268] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
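The two restore.sh steps just logged (the md5sum check at restore.sh@76 and the spdk_dd invocation at restore.sh@79) reduce to the following standalone sketch, reusing the exact paths and flags shown in the log; this is an illustration of what the test appears to run, not the actual test script, and the surrounding fixture setup is assumed:

```bash
#!/usr/bin/env bash
set -e
SPDK=/home/vagrant/spdk_repo/spdk

# Step 1 (restore.sh@76): after the fast shutdown/restart cycle, verify the
# test file against its previously recorded checksum; the log shows
# "/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK" for this check.
md5sum -c "$SPDK/test/ftl/testfile.md5"

# Step 2 (restore.sh@79): write the test file again through the restored ftl0
# bdev, offset by 131072 I/O units (--seek), using the saved bdev JSON config.
"$SPDK/build/bin/spdk_dd" \
    --if="$SPDK/test/ftl/testfile" \
    --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json" \
    --seek=131072
```

The spdk_dd startup banner and EAL parameter lines that follow are the normal SPDK application bring-up for this second write pass.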
00:32:07.194 [2024-11-29 09:49:34.706385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98039 ] 00:32:07.194 [2024-11-29 09:49:34.838584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:07.194 [2024-11-29 09:49:34.866790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.194 [2024-11-29 09:49:34.886563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.461 [2024-11-29 09:49:34.976699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.461 [2024-11-29 09:49:34.976775] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.461 [2024-11-29 09:49:35.133715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.133764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:07.461 [2024-11-29 09:49:35.133780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:07.461 [2024-11-29 09:49:35.133789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.133843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.133857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:07.461 [2024-11-29 09:49:35.133869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:07.461 [2024-11-29 09:49:35.133879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.133898] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:07.461 [2024-11-29 09:49:35.134194] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:07.461 [2024-11-29 09:49:35.134223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.134236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:07.461 [2024-11-29 09:49:35.134248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:32:07.461 [2024-11-29 09:49:35.134256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.134496] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:07.461 [2024-11-29 09:49:35.134522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.134531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:07.461 [2024-11-29 09:49:35.134542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:07.461 [2024-11-29 09:49:35.134557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.134614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.134625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:07.461 [2024-11-29 09:49:35.134635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:07.461 [2024-11-29 
09:49:35.134643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.134881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.134898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:07.461 [2024-11-29 09:49:35.134906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:32:07.461 [2024-11-29 09:49:35.134916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.135023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.135039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:07.461 [2024-11-29 09:49:35.135047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:07.461 [2024-11-29 09:49:35.135055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.135078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.135086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:07.461 [2024-11-29 09:49:35.135094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:07.461 [2024-11-29 09:49:35.135104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.135128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:07.461 [2024-11-29 09:49:35.136572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.136617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:07.461 [2024-11-29 09:49:35.136627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:32:07.461 [2024-11-29 09:49:35.136635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.136664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.136672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:07.461 [2024-11-29 09:49:35.136681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:07.461 [2024-11-29 09:49:35.136689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.136714] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:07.461 [2024-11-29 09:49:35.136732] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:07.461 [2024-11-29 09:49:35.136767] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:07.461 [2024-11-29 09:49:35.136782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:07.461 [2024-11-29 09:49:35.136883] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:07.461 [2024-11-29 09:49:35.136894] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:07.461 [2024-11-29 09:49:35.136905] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:07.461 
[2024-11-29 09:49:35.136924] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:07.461 [2024-11-29 09:49:35.136934] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:07.461 [2024-11-29 09:49:35.136942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:07.461 [2024-11-29 09:49:35.136950] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:07.461 [2024-11-29 09:49:35.136959] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:07.461 [2024-11-29 09:49:35.136967] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:07.461 [2024-11-29 09:49:35.136979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.136987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:07.461 [2024-11-29 09:49:35.136996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:32:07.461 [2024-11-29 09:49:35.137003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.137086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.461 [2024-11-29 09:49:35.137097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:07.461 [2024-11-29 09:49:35.137105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:07.461 [2024-11-29 09:49:35.137113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.461 [2024-11-29 09:49:35.137221] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:07.461 [2024-11-29 09:49:35.137240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:07.461 [2024-11-29 09:49:35.137249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.461 [2024-11-29 09:49:35.137259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:07.462 [2024-11-29 09:49:35.137277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:07.462 [2024-11-29 09:49:35.137324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.462 [2024-11-29 09:49:35.137337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:07.462 [2024-11-29 09:49:35.137344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:07.462 [2024-11-29 09:49:35.137350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.462 [2024-11-29 09:49:35.137358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:07.462 [2024-11-29 09:49:35.137365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:07.462 [2024-11-29 09:49:35.137371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:32:07.462 [2024-11-29 09:49:35.137389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:07.462 [2024-11-29 09:49:35.137409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:07.462 [2024-11-29 09:49:35.137428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:07.462 [2024-11-29 09:49:35.137447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:07.462 [2024-11-29 09:49:35.137467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:07.462 [2024-11-29 09:49:35.137485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.462 [2024-11-29 09:49:35.137502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:07.462 [2024-11-29 09:49:35.137508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:07.462 [2024-11-29 09:49:35.137514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.462 [2024-11-29 09:49:35.137520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:07.462 [2024-11-29 09:49:35.137527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:07.462 [2024-11-29 09:49:35.137534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:07.462 [2024-11-29 09:49:35.137547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:07.462 [2024-11-29 09:49:35.137554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137560] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:07.462 [2024-11-29 09:49:35.137567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:07.462 [2024-11-29 09:49:35.137576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.462 [2024-11-29 09:49:35.137606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:07.462 [2024-11-29 09:49:35.137613] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:07.462 [2024-11-29 09:49:35.137621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:07.462 [2024-11-29 09:49:35.137628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:07.462 [2024-11-29 09:49:35.137634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:07.462 [2024-11-29 09:49:35.137641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:07.462 [2024-11-29 09:49:35.137649] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:07.462 [2024-11-29 09:49:35.137658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:07.462 [2024-11-29 09:49:35.137666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:07.462 [2024-11-29 09:49:35.137674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:07.462 [2024-11-29 09:49:35.137681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:07.462 [2024-11-29 09:49:35.137688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:07.462 [2024-11-29 09:49:35.137695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:07.462 [2024-11-29 09:49:35.137702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:07.462 [2024-11-29 09:49:35.137709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:07.462 [2024-11-29 09:49:35.137716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:07.462 [2024-11-29 09:49:35.137722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:07.462 [2024-11-29 09:49:35.137729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:07.462 [2024-11-29 09:49:35.137739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:07.462 [2024-11-29 09:49:35.137746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:07.462 [2024-11-29 09:49:35.137753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:07.462 [2024-11-29 09:49:35.137760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:07.462 [2024-11-29 09:49:35.137767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:07.462 [2024-11-29 09:49:35.137776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:32:07.462 [2024-11-29 09:49:35.137784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:07.462 [2024-11-29 09:49:35.137791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:07.462 [2024-11-29 09:49:35.137798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:07.462 [2024-11-29 09:49:35.137805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:07.462 [2024-11-29 09:49:35.137812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.462 [2024-11-29 09:49:35.137819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:07.462 [2024-11-29 09:49:35.137826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:32:07.463 [2024-11-29 09:49:35.137834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.143897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.463 [2024-11-29 09:49:35.143928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:07.463 [2024-11-29 09:49:35.143938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.997 ms 00:32:07.463 [2024-11-29 09:49:35.143945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.144020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.463 [2024-11-29 09:49:35.144028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:07.463 [2024-11-29 09:49:35.144039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:07.463 [2024-11-29 09:49:35.144049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.162306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.463 [2024-11-29 09:49:35.162364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:07.463 [2024-11-29 09:49:35.162383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.221 ms 00:32:07.463 [2024-11-29 09:49:35.162395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.162453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.463 [2024-11-29 09:49:35.162468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:07.463 [2024-11-29 09:49:35.162481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:07.463 [2024-11-29 09:49:35.162492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.162646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.463 [2024-11-29 09:49:35.162665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:07.463 [2024-11-29 09:49:35.162680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:32:07.463 [2024-11-29 09:49:35.162693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.162872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:07.463 [2024-11-29 09:49:35.162906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:07.463 [2024-11-29 09:49:35.162921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:32:07.463 [2024-11-29 09:49:35.162939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.169155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.463 [2024-11-29 09:49:35.169204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:07.463 [2024-11-29 09:49:35.169218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.187 ms 00:32:07.463 [2024-11-29 09:49:35.169230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.463 [2024-11-29 09:49:35.169387] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:07.463 [2024-11-29 09:49:35.169407] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:07.463 [2024-11-29 09:49:35.169421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.463 [2024-11-29 09:49:35.169438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:07.463 [2024-11-29 09:49:35.169455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:07.463 [2024-11-29 09:49:35.169465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.785 [2024-11-29 09:49:35.183516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.785 [2024-11-29 09:49:35.183553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:07.785 [2024-11-29 09:49:35.183566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.028 ms 00:32:07.785 [2024-11-29 09:49:35.183574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.785 [2024-11-29 09:49:35.183695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.785 [2024-11-29 09:49:35.183704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:07.785 [2024-11-29 09:49:35.183715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:32:07.785 [2024-11-29 09:49:35.183722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.785 [2024-11-29 09:49:35.183768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.183777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:07.786 [2024-11-29 09:49:35.183785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:07.786 [2024-11-29 09:49:35.183797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.184095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.184113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:07.786 [2024-11-29 09:49:35.184123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:32:07.786 [2024-11-29 09:49:35.184131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.184146] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:07.786 
[2024-11-29 09:49:35.184155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.184168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:07.786 [2024-11-29 09:49:35.184175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:07.786 [2024-11-29 09:49:35.184183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.192094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:07.786 [2024-11-29 09:49:35.192222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.192232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:07.786 [2024-11-29 09:49:35.192241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.022 ms 00:32:07.786 [2024-11-29 09:49:35.192251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.194617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.194644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:07.786 [2024-11-29 09:49:35.194654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:32:07.786 [2024-11-29 09:49:35.194664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.194728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.194738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:07.786 [2024-11-29 09:49:35.194747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:07.786 [2024-11-29 09:49:35.194762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.194798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.194808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:07.786 [2024-11-29 09:49:35.194822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:07.786 [2024-11-29 09:49:35.194830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.194860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:07.786 [2024-11-29 09:49:35.194869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.194877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:07.786 [2024-11-29 09:49:35.194885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:07.786 [2024-11-29 09:49:35.194893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.199165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.199205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:07.786 [2024-11-29 09:49:35.199214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:32:07.786 [2024-11-29 09:49:35.199224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.199291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.786 [2024-11-29 09:49:35.199301] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:07.786 [2024-11-29 09:49:35.199308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:07.786 [2024-11-29 09:49:35.199318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.786 [2024-11-29 09:49:35.200207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 66.098 ms, result 0 00:32:08.734  [2024-11-29T09:50:24.595Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-29
[2024-11-29 09:50:24.431468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.869 [2024-11-29 09:50:24.431525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:56.869 [2024-11-29 09:50:24.431539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:56.869 [2024-11-29 09:50:24.431547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.869 [2024-11-29 09:50:24.432858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:56.869 [2024-11-29 09:50:24.435372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.869 [2024-11-29 09:50:24.435405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:56.869 [2024-11-29 09:50:24.435415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:32:56.869 [2024-11-29 09:50:24.435427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.869 [2024-11-29 09:50:24.444280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.869 [2024-11-29 09:50:24.444312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:56.869 [2024-11-29 09:50:24.444321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.549 ms 00:32:56.869 [2024-11-29 09:50:24.444329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.869 [2024-11-29 09:50:24.444355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.869 [2024-11-29 09:50:24.444364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:56.869 [2024-11-29 09:50:24.444372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:56.869 [2024-11-29 09:50:24.444379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.869 [2024-11-29 09:50:24.444426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.869 [2024-11-29 09:50:24.444434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:56.869 [2024-11-29 09:50:24.444442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:56.869 [2024-11-29 09:50:24.444449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.869 [2024-11-29 09:50:24.444462] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:56.869 [2024-11-29 09:50:24.444473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:32:56.869 [2024-11-29 09:50:24.444485 … 09:50:24.445248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2–100 (identical entries elided): 0 / 261120 wr_cnt: 0 state: free 00:32:56.870 [2024-11-29 09:50:24.445264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:56.870 [2024-11-29 09:50:24.445272] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4181b08-3dfb-4e83-b6f4-aa68fb5795b9 00:32:56.870 [2024-11-29 09:50:24.445279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:32:56.870 [2024-11-29 09:50:24.445311] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:32:56.870 [2024-11-29 09:50:24.445318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:32:56.870 [2024-11-29 09:50:24.445328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:56.870 [2024-11-29 09:50:24.445336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:56.870 [2024-11-29 09:50:24.445343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:56.870 [2024-11-29 09:50:24.445350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:56.870 [2024-11-29 09:50:24.445357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:56.870 [2024-11-29 09:50:24.445363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:56.870 [2024-11-29 09:50:24.445369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.870 [2024-11-29 09:50:24.445377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:56.870 [2024-11-29 09:50:24.445385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:32:56.870 [2024-11-29 09:50:24.445391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.446766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.870 [2024-11-29 09:50:24.446794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:56.870 [2024-11-29 09:50:24.446803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:32:56.870 [2024-11-29 09:50:24.446810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.446880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.870 [2024-11-29 09:50:24.446888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:56.870 [2024-11-29 09:50:24.446895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:56.870 [2024-11-29 09:50:24.446907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.451514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.451549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:56.870 [2024-11-29 09:50:24.451558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.451565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.451624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.451633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:56.870 [2024-11-29 09:50:24.451640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.451647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.451691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.451703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:56.870 [2024-11-29 09:50:24.451710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.451717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.451732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.451739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:56.870 [2024-11-29 09:50:24.451746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.451753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870
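The statistics dump above is consistent with the usual definition of the write-amplification factor (WAF) as total media writes over user writes: with 129568 total against 129536 user writes, the FTL added only 32 blocks of metadata writes in this pass. A quick check of the logged value:

    # WAF from the ftl_dev_dump_stats figures above.
    total_writes, user_writes = 129568, 129536
    waf = total_writes / user_writes
    print(waf, round(waf, 4))  # 1.000247... -> logged as "WAF: 1.0002"

Band 1's 129536 valid blocks equal the user-write count, i.e. everything written so far sits in a single open band.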
[2024-11-29 09:50:24.459997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.460035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:56.870 [2024-11-29 09:50:24.460045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.460056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.467216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.467254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:56.870 [2024-11-29 09:50:24.467264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.467273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.467295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.467303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:56.870 [2024-11-29 09:50:24.467319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.467326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.467364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.467373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:56.870 [2024-11-29 09:50:24.467380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.467387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.467431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.467441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:56.870 [2024-11-29 09:50:24.467453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.870 [2024-11-29 09:50:24.467463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.870 [2024-11-29 09:50:24.467485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.870 [2024-11-29 09:50:24.467493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:56.870 [2024-11-29 09:50:24.467500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.871 [2024-11-29 09:50:24.467507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.871 [2024-11-29 09:50:24.467542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.871 [2024-11-29 09:50:24.467550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:56.871 [2024-11-29 09:50:24.467558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.871 [2024-11-29 09:50:24.467567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.871 [2024-11-29 09:50:24.467613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.871 [2024-11-29 09:50:24.467628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:56.871 [2024-11-29 09:50:24.467635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.871 [2024-11-29 09:50:24.467643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.871
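Each management step in these traces is a fixed group of records emitted by mngt/ftl_mngt.c:trace_step — an Action (or, on the teardown path above, Rollback) marker followed by name, duration and status. A minimal sketch for folding such output into per-step timings; it assumes one record per line, as in SPDK's raw console output (this flowed transcript also interleaves a second elapsed-time column):

    import re

    # Matches the keyed trace_step records, e.g.
    #   "... mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller"
    RECORD = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] (name|duration|status): (.*)")

    def steps(lines):
        """Yield one (name, duration, status) tuple per Action/Rollback group."""
        cur = {}
        for line in lines:
            m = RECORD.search(line)
            if not m:
                continue  # skips the bare Action/Rollback markers and other notices
            key, value = m.groups()
            cur[key] = value.strip()
            if key == "status":  # the status record closes a group
                yield cur.get("name"), cur.get("duration"), cur["status"]
                cur = {}

Fed the shutdown sequence above, this would yield entries such as ("Stop core poller", "6.549 ms", "0").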
[2024-11-29 09:50:24.467749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.064 ms, result 0 00:32:58.769 00:32:58.769 00:32:58.769 09:50:25 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:58.769 [2024-11-29 09:50:26.047942] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:32:58.769 [2024-11-29 09:50:26.048375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98553 ] 00:32:58.769 [2024-11-29 09:50:26.180380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:58.769 [2024-11-29 09:50:26.210967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.769 [2024-11-29 09:50:26.234225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.769 [2024-11-29 09:50:26.326453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:58.769 [2024-11-29 09:50:26.326536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:58.769 [2024-11-29 09:50:26.480966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.481023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:58.769 [2024-11-29 09:50:26.481039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:58.769 [2024-11-29 09:50:26.481053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.481107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.481123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:58.769 [2024-11-29 09:50:26.481133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:58.769 [2024-11-29 09:50:26.481145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.481169] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:58.769 [2024-11-29 09:50:26.481455] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:58.769 [2024-11-29 09:50:26.481478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.481489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:58.769 [2024-11-29 09:50:26.481500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:32:58.769 [2024-11-29 09:50:26.481507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.481899] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:58.769
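The spdk_dd invocation above reads the just-written test data back out of the FTL bdev with dd-style semantics: --ib names the input bdev, --of the output file, and --skip/--count are given in input blocks. Assuming ftl0 exposes 4 KiB logical blocks — an assumption here, though consistent with the "/1024 [MB]" totals in the copy progress further down — the numbers decode as:

    # Hedged decode of the spdk_dd parameters above; assumes dd-style
    # block-unit semantics and a 4 KiB logical block on the ftl0 bdev.
    BLOCK = 4096
    print(262144 * BLOCK // 2**20)  # --count=262144 -> 1024 MiB to read back
    print(131072 * BLOCK // 2**20)  # --skip=131072  -> starting 512 MiB into ftl0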
[2024-11-29 09:50:26.481935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.481945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:58.769 [2024-11-29 09:50:26.481963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:58.769 [2024-11-29 09:50:26.481979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.482026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.482037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:58.769 [2024-11-29 09:50:26.482047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:58.769 [2024-11-29 09:50:26.482057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.482340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.482367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:58.769 [2024-11-29 09:50:26.482378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:32:58.769 [2024-11-29 09:50:26.482394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.482479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.482496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:58.769 [2024-11-29 09:50:26.482506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:58.769 [2024-11-29 09:50:26.482516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.482549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.482560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:58.769 [2024-11-29 09:50:26.482570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:58.769 [2024-11-29 09:50:26.482580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.482628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:58.769 [2024-11-29 09:50:26.484197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.484232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:58.769 [2024-11-29 09:50:26.484251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:32:58.769 [2024-11-29 09:50:26.484262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.484298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.484308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:58.769 [2024-11-29 09:50:26.484323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:58.769 [2024-11-29 09:50:26.484333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.484372] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:58.769 [2024-11-29 09:50:26.484397] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:58.769 [2024-11-29 09:50:26.484436] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:32:58.769 [2024-11-29 09:50:26.484458] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:58.769 [2024-11-29 09:50:26.484567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:58.769 [2024-11-29 09:50:26.484607] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:58.769 [2024-11-29 09:50:26.484625] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:58.769 [2024-11-29 09:50:26.484641] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:58.769 [2024-11-29 09:50:26.484661] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:58.769 [2024-11-29 09:50:26.484671] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:58.769 [2024-11-29 09:50:26.484682] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:58.769 [2024-11-29 09:50:26.484695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:58.769 [2024-11-29 09:50:26.484705] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:58.769 [2024-11-29 09:50:26.484718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.484727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:58.769 [2024-11-29 09:50:26.484737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:32:58.769 [2024-11-29 09:50:26.484746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.484847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.769 [2024-11-29 09:50:26.484869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:58.769 [2024-11-29 09:50:26.484878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:32:58.769 [2024-11-29 09:50:26.484891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.769 [2024-11-29 09:50:26.485036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:58.769 [2024-11-29 09:50:26.485053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:58.769 [2024-11-29 09:50:26.485064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:58.769 [2024-11-29 09:50:26.485078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:58.769 [2024-11-29 09:50:26.485091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:58.769 [2024-11-29 09:50:26.485100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:58.769 [2024-11-29 09:50:26.485109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:58.769 [2024-11-29 09:50:26.485118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:58.769 [2024-11-29 09:50:26.485137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:58.769 [2024-11-29 09:50:26.485147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:58.769 [2024-11-29 09:50:26.485158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:58.769 [2024-11-29 09:50:26.485165] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:58.769 [2024-11-29 09:50:26.485171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:58.769 [2024-11-29 09:50:26.485177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:58.769 [2024-11-29 09:50:26.485184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:58.769 [2024-11-29 09:50:26.485190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:58.770 [2024-11-29 09:50:26.485203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:58.770 [2024-11-29 09:50:26.485210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:58.770 [2024-11-29 09:50:26.485231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:58.770 [2024-11-29 09:50:26.485245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:58.770 [2024-11-29 09:50:26.485252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:58.770 [2024-11-29 09:50:26.485269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:58.770 [2024-11-29 09:50:26.485278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:58.770 [2024-11-29 09:50:26.485305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:58.770 [2024-11-29 09:50:26.485315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:58.770 [2024-11-29 09:50:26.485332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:58.770 [2024-11-29 09:50:26.485340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:58.770 [2024-11-29 09:50:26.485357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:58.770 [2024-11-29 09:50:26.485366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:58.770 [2024-11-29 09:50:26.485376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:58.770 [2024-11-29 09:50:26.485385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:58.770 [2024-11-29 09:50:26.485394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:58.770 [2024-11-29 09:50:26.485402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:58.770 [2024-11-29 09:50:26.485419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:58.770 [2024-11-29 09:50:26.485428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:32:58.770 [2024-11-29 09:50:26.485436] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:58.770 [2024-11-29 09:50:26.485446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:58.770 [2024-11-29 09:50:26.485458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:58.770 [2024-11-29 09:50:26.485471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:58.770 [2024-11-29 09:50:26.485481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:58.770 [2024-11-29 09:50:26.485490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:58.770 [2024-11-29 09:50:26.485498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:58.770 [2024-11-29 09:50:26.485507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:58.770 [2024-11-29 09:50:26.485516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:58.770 [2024-11-29 09:50:26.485527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:58.770 [2024-11-29 09:50:26.485539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:58.770 [2024-11-29 09:50:26.485550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:58.770 [2024-11-29 09:50:26.485561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:58.770 [2024-11-29 09:50:26.485571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:58.770 [2024-11-29 09:50:26.485580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:58.770 [2024-11-29 09:50:26.485602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:58.770 [2024-11-29 09:50:26.485612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:58.770 [2024-11-29 09:50:26.485621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:58.770 [2024-11-29 09:50:26.485630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:58.770 [2024-11-29 09:50:26.485639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:58.770 [2024-11-29 09:50:26.485648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:58.770 [2024-11-29 09:50:26.485657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:58.770 [2024-11-29 09:50:26.485667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:58.770 [2024-11-29 09:50:26.485677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:58.770 
[2024-11-29 09:50:26.485686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:58.770 [2024-11-29 09:50:26.485698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:58.770 [2024-11-29 09:50:26.485708] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:58.770 [2024-11-29 09:50:26.485718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:58.770 [2024-11-29 09:50:26.485729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:58.770 [2024-11-29 09:50:26.485738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:58.770 [2024-11-29 09:50:26.485747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:58.770 [2024-11-29 09:50:26.485757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:58.770 [2024-11-29 09:50:26.485767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.770 [2024-11-29 09:50:26.485776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:58.770 [2024-11-29 09:50:26.485789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:32:58.770 [2024-11-29 09:50:26.485798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.770 [2024-11-29 09:50:26.492069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.029 [2024-11-29 09:50:26.492107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:59.029 [2024-11-29 09:50:26.492120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.216 ms 00:32:59.029 [2024-11-29 09:50:26.492129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.029 [2024-11-29 09:50:26.492208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.029 [2024-11-29 09:50:26.492217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:59.029 [2024-11-29 09:50:26.492224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:59.029 [2024-11-29 09:50:26.492231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.029 [2024-11-29 09:50:26.511319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.029 [2024-11-29 09:50:26.511383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:59.029 [2024-11-29 09:50:26.511413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.044 ms 00:32:59.029 [2024-11-29 09:50:26.511428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.029 [2024-11-29 09:50:26.511490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.029 [2024-11-29 09:50:26.511516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:59.029 [2024-11-29 09:50:26.511531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:59.029 
[2024-11-29 09:50:26.511548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.029 [2024-11-29 09:50:26.511734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.029 [2024-11-29 09:50:26.511766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:59.029 [2024-11-29 09:50:26.511798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:32:59.030 [2024-11-29 09:50:26.511818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.512020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.512049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:59.030 [2024-11-29 09:50:26.512065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:32:59.030 [2024-11-29 09:50:26.512079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.518478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.518520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:59.030 [2024-11-29 09:50:26.518532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.363 ms 00:32:59.030 [2024-11-29 09:50:26.518542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.518672] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:59.030 [2024-11-29 09:50:26.518699] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:59.030 [2024-11-29 09:50:26.518715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.518726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:59.030 [2024-11-29 09:50:26.518739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:32:59.030 [2024-11-29 09:50:26.518749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.533432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.533480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:59.030 [2024-11-29 09:50:26.533493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.663 ms 00:32:59.030 [2024-11-29 09:50:26.533502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.533642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.533665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:59.030 [2024-11-29 09:50:26.533679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:32:59.030 [2024-11-29 09:50:26.533693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.533743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.533758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:59.030 [2024-11-29 09:50:26.533766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:59.030 [2024-11-29 09:50:26.533774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.534092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.534112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:59.030 [2024-11-29 09:50:26.534216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:32:59.030 [2024-11-29 09:50:26.534225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.534243] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:59.030 [2024-11-29 09:50:26.534258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.534270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:59.030 [2024-11-29 09:50:26.534280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:59.030 [2024-11-29 09:50:26.534289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.542528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:59.030 [2024-11-29 09:50:26.542692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.542704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:59.030 [2024-11-29 09:50:26.542715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.382 ms 00:32:59.030 [2024-11-29 09:50:26.542727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.545160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.545193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:59.030 [2024-11-29 09:50:26.545205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:32:59.030 [2024-11-29 09:50:26.545215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.545270] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:59.030 [2024-11-29 09:50:26.545976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.546003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:59.030 [2024-11-29 09:50:26.546020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:32:59.030 [2024-11-29 09:50:26.546030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.546075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.546087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:59.030 [2024-11-29 09:50:26.546101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:59.030 [2024-11-29 09:50:26.546111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.546147] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:59.030
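The L2P figures in this startup tie back to the layout dump earlier: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB "Region l2p", while the ftl_l2p_cache notice above caps the resident portion at 9 MiB. A quick check of the region size:

    # L2P table size = entries x address size (both from the layout dump above).
    entries, addr_size = 20971520, 4
    print(entries * addr_size / 2**20)  # 80.0 -> "Region l2p ... blocks: 80.00 MiB"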
[2024-11-29 09:50:26.546166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.546176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:59.030 [2024-11-29 09:50:26.546186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:59.030 [2024-11-29 09:50:26.546199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.549649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.549689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:59.030 [2024-11-29 09:50:26.549700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.424 ms 00:32:59.030 [2024-11-29 09:50:26.549710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.549825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.030 [2024-11-29 09:50:26.549843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:59.030 [2024-11-29 09:50:26.549852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:59.030 [2024-11-29 09:50:26.549859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.030 [2024-11-29 09:50:26.550859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 69.426 ms, result 0 00:33:00.407  [2024-11-29T09:50:29.076Z] Copying: 40/1024 [MB] (40 MBps) […intermediate progress updates elided; per-interval throughput ranged from roughly 13 MBps to 51 MBps…] [2024-11-29T09:50:51.067Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-11-29 09:50:50.908914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.341 [2024-11-29 09:50:50.908979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:23.341 [2024-11-29 09:50:50.908992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:23.341 [2024-11-29 09:50:50.909000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.341 [2024-11-29 09:50:50.909025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:23.341 [2024-11-29 09:50:50.909499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.341 [2024-11-29 09:50:50.909532]
[2024-11-29 09:50:50.908914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.341 [2024-11-29 09:50:50.908979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:33:23.341 [2024-11-29 09:50:50.908992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:33:23.341 [2024-11-29 09:50:50.909000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.341 [2024-11-29 09:50:50.909025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:23.341 [2024-11-29 09:50:50.909499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.341 [2024-11-29 09:50:50.909532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:23.341 [2024-11-29 09:50:50.909547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms
00:33:23.341 [2024-11-29 09:50:50.909554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.341 [2024-11-29 09:50:50.909775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.341 [2024-11-29 09:50:50.909793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:23.341 [2024-11-29 09:50:50.909807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms
00:33:23.341 [2024-11-29 09:50:50.909815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.341 [2024-11-29 09:50:50.909842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.341 [2024-11-29 09:50:50.909852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:33:23.341 [2024-11-29 09:50:50.909860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:23.341 [2024-11-29 09:50:50.909869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.341 [2024-11-29 09:50:50.909922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.341 [2024-11-29 09:50:50.909931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:33:23.341 [2024-11-29 09:50:50.909940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:33:23.341 [2024-11-29 09:50:50.909949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.341 [2024-11-29 09:50:50.909963] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:33:23.341 [2024-11-29 09:50:50.909978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:33:23.342 [2024-11-29 09:50:50.909989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.909998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:33:23.342 [2024-11-29 09:50:50.910735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:33:23.343 [2024-11-29 09:50:50.910835] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:33:23.343 [2024-11-29 09:50:50.910842] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4181b08-3dfb-4e83-b6f4-aa68fb5795b9
00:33:23.343 [2024-11-29 09:50:50.910849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:33:23.343 [2024-11-29 09:50:50.910856] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568
00:33:23.343 [2024-11-29 09:50:50.910863] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536
00:33:23.343 [2024-11-29 09:50:50.910875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208
00:33:23.343 [2024-11-29 09:50:50.910881] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:33:23.343 [2024-11-29 09:50:50.910888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:33:23.343 [2024-11-29 09:50:50.910896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:33:23.343 [2024-11-29 09:50:50.910902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:33:23.343 [2024-11-29 09:50:50.910908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:33:23.343 [2024-11-29 09:50:50.910915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.343 [2024-11-29 09:50:50.910922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:33:23.343 [2024-11-29 09:50:50.910930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms
00:33:23.343 [2024-11-29 09:50:50.910943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.912791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.343 [2024-11-29 09:50:50.912936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:33:23.343 [2024-11-29 09:50:50.912945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms
00:33:23.343 [2024-11-29 09:50:50.912952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.913029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:23.343 [2024-11-29 09:50:50.913037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:33:23.343 [2024-11-29 09:50:50.913044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:33:23.343 [2024-11-29 09:50:50.913051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.918772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.918802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:33:23.343 [2024-11-29 09:50:50.918811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.918819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.918877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.918886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:33:23.343 [2024-11-29 09:50:50.918893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.918900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.918944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.918954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:33:23.343 [2024-11-29 09:50:50.918962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.918969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.918983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.918990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:33:23.343 [2024-11-29 09:50:50.918999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.919007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.928965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.929011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:33:23.343 [2024-11-29 09:50:50.929021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.929029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.937044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:33:23.343 [2024-11-29 09:50:50.937054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.937062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.937114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:33:23.343 [2024-11-29 09:50:50.937123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.937130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.937160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:33:23.343 [2024-11-29 09:50:50.937168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.937175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.937230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:33:23.343 [2024-11-29 09:50:50.937246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.937253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.937292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:33:23.343 [2024-11-29 09:50:50.937299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.937306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.937345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:23.343 [2024-11-29 09:50:50.937355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.937361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:23.343 [2024-11-29 09:50:50.937407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:23.343 [2024-11-29 09:50:50.937415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:23.343 [2024-11-29 09:50:50.937422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:23.343 [2024-11-29 09:50:50.937534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.596 ms, result 0
00:33:23.602
00:33:23.602 09:50:51 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:33:26.174 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
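
The 'testfile: OK' line is the core assertion of this restore test: a checksum recorded before the fast shutdown must match the data read back through the restored FTL device. A minimal sketch of that pattern, reusing the testfile path that restore.sh works with above:

    # Before the shutdown: record a checksum of the data written through FTL.
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # After the restore: re-read the data and verify it matches bit for bit;
    # md5sum -c prints ".../testfile: OK" on success and exits non-zero otherwise.
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
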
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96254
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96254 ']'
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96254
00:33:26.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96254) - No such process
00:33:26.174 Process with pid 96254 is not found 09:50:53 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 96254 is not found'
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:33:26.174 Remove shared memory files 09:50:53 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_band_md /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_l2p_l1 /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_l2p_l2 /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_l2p_l2_ctx /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_nvc_md /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_p2l_pool /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_sb /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_sb_shm /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_trim_bitmap /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_trim_log /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_trim_md /dev/hugepages/ftl_f4181b08-3dfb-4e83-b6f4-aa68fb5795b9_vmap
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:33:26.174
00:33:26.174 real	4m17.833s
00:33:26.174 user	4m7.214s
00:33:26.174 sys	0m11.456s
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:26.174 09:50:53 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:33:26.174 ************************************
00:33:26.174 END TEST ftl_restore_fast
00:33:26.174 ************************************
00:33:26.174 09:50:53 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:26.174 09:50:53 ftl -- ftl/ftl.sh@14 -- # killprocess 87915
00:33:26.174 09:50:53 ftl -- common/autotest_common.sh@954 -- # '[' -z 87915 ']'
00:33:26.174 09:50:53 ftl -- common/autotest_common.sh@958 -- # kill -0 87915
00:33:26.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87915) - No such process
00:33:26.174 Process with pid 87915 is not found 09:50:53 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 87915 is not found'
00:33:26.174 09:50:53 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:26.174 09:50:53 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98856
00:33:26.174 09:50:53 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98856
00:33:26.174 09:50:53 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:26.174 09:50:53 ftl -- common/autotest_common.sh@835 -- # '[' -z 98856 ']'
00:33:26.174 09:50:53 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:26.174 09:50:53 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:26.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 09:50:53 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:26.174 09:50:53 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:26.174 09:50:53 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:26.174 [2024-11-29 09:50:53.485342] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization...
00:33:26.174 [2024-11-29 09:50:53.485463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98856 ]
00:33:26.174 [2024-11-29 09:50:53.617135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:26.174 [2024-11-29 09:50:53.642743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:26.174 [2024-11-29 09:50:53.660208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:26.755 09:50:54 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:26.755 09:50:54 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:26.755 09:50:54 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:27.012 nvme0n1
00:33:27.012 09:50:54 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:27.012 09:50:54 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:27.012 09:50:54 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:27.270 09:50:54 ftl -- ftl/common.sh@28 -- # stores=a33f9ce0-c8c5-474d-bcd4-0cfe4b2b0f11
00:33:27.270 09:50:54 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:27.270 09:50:54 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a33f9ce0-c8c5-474d-bcd4-0cfe4b2b0f11
00:33:27.270 09:50:54 ftl -- ftl/ftl.sh@23 -- # killprocess 98856
00:33:27.270 09:50:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 98856 ']'
00:33:27.270 09:50:54 ftl -- common/autotest_common.sh@958 -- # kill -0 98856
00:33:27.270 09:50:54 ftl -- common/autotest_common.sh@959 -- # uname
00:33:27.270 09:50:54 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:27.270 09:50:54 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98856
00:33:27.530 09:50:55 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:27.530 09:50:55 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:27.530 killing process with pid 98856 09:50:55 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98856'
00:33:27.530 09:50:55 ftl -- common/autotest_common.sh@973 -- # kill 98856
00:33:27.530 09:50:55 ftl -- common/autotest_common.sh@978 -- # wait 98856
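
clear_lvols above is a small guard against leftover state: it lists every logical-volume store over the RPC socket and deletes each one by UUID before the next stage runs. A minimal sketch of the same loop, using the rpc.py calls visible in the trace (repo paths shortened):

    # List all lvstores, extract their UUIDs, and delete each one.
    stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
      scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done
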
00:33:27.530 09:50:55 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:27.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:27.788 Waiting for block devices as requested
00:33:27.788 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:28.045 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:28.045 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:28.045 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:28.045 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:33.303 09:51:00 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:33:33.303 Remove shared memory files 09:51:00 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:33.303 09:51:00 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:33:33.303 09:51:00 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:33:33.303 09:51:00 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:33:33.303 09:51:00 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:33.303 09:51:00 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:33.303
00:33:33.303 real	16m23.581s
00:33:33.303 user	18m11.110s
00:33:33.303 sys	1m19.362s
00:33:33.303 09:51:00 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:33.303 09:51:00 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:33.303 ************************************
00:33:33.303 END TEST ftl
00:33:33.303 ************************************
00:33:33.304 09:51:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:33:33.304 09:51:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:33:33.304 09:51:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:33:33.304 09:51:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:33:33.304 09:51:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:33:33.304 09:51:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:33:33.304 09:51:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:33:33.304 09:51:00 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:33:33.304 09:51:00 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:33:33.304 09:51:00 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:33:33.304 09:51:00 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:33.304 09:51:00 -- common/autotest_common.sh@10 -- # set +x
00:33:33.304 09:51:00 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:33:33.304 09:51:00 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:33:33.304 09:51:00 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:33:33.304 09:51:00 -- common/autotest_common.sh@10 -- # set +x
00:33:34.237 INFO: APP EXITING
00:33:34.237 INFO: killing all VMs
00:33:34.237 INFO: killing vhost app
00:33:34.237 INFO: EXIT DONE
00:33:34.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:35.071 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:33:35.071 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:33:35.071 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:33:35.071 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:33:35.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:35.589 Cleaning
00:33:35.589 Removing: /var/run/dpdk/spdk0/config
00:33:35.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:35.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:35.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:35.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:35.848 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:35.848 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:35.848 Removing: /var/run/dpdk/spdk0
00:33:35.848 Removing: /var/run/dpdk/spdk_pid70776
00:33:35.848 Removing: /var/run/dpdk/spdk_pid70939
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71141
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71228
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71251
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71363
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71381
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71558
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71631
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71716
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71811
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71891
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71926
00:33:35.848 Removing: /var/run/dpdk/spdk_pid71962
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72032
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72133
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72552
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72600
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72641
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72657
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72715
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72731
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72789
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72805
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72847
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72865
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72907
00:33:35.848 Removing: /var/run/dpdk/spdk_pid72925
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73052
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73083
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73172
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73333
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73395
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73426
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73837
00:33:35.848 Removing: /var/run/dpdk/spdk_pid73932
00:33:35.848 Removing: /var/run/dpdk/spdk_pid74041
00:33:35.848 Removing: /var/run/dpdk/spdk_pid74083
00:33:35.848 Removing: /var/run/dpdk/spdk_pid74103
00:33:35.848 Removing: /var/run/dpdk/spdk_pid74187
00:33:35.848 Removing: /var/run/dpdk/spdk_pid74796
00:33:35.848 Removing: /var/run/dpdk/spdk_pid74827
00:33:35.848 Removing: /var/run/dpdk/spdk_pid75285
00:33:35.848 Removing: /var/run/dpdk/spdk_pid75378
00:33:35.848 Removing: /var/run/dpdk/spdk_pid75487
00:33:35.848 Removing: /var/run/dpdk/spdk_pid75529
00:33:35.848 Removing: /var/run/dpdk/spdk_pid75549
00:33:35.848 Removing: /var/run/dpdk/spdk_pid75569
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77408
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77529
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77533
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77550
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77596
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77600
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77612
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77657
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77661
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77673
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77718
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77722
00:33:35.848 Removing: /var/run/dpdk/spdk_pid77734
00:33:35.848 Removing: /var/run/dpdk/spdk_pid79121
00:33:35.848 Removing: /var/run/dpdk/spdk_pid79207
00:33:35.848 Removing: /var/run/dpdk/spdk_pid80605
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82362
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82419
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82492
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82585
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82675
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82760
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82823
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82887
00:33:35.848 Removing: /var/run/dpdk/spdk_pid82986
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83066
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83151
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83214
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83278
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83377
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83462
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83548
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83600
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83668
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83769
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83850
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83940
00:33:35.848 Removing: /var/run/dpdk/spdk_pid83992
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84061
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84124
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84194
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84290
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84371
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84460
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84518
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84581
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84654
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84718
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84816
00:33:35.848 Removing: /var/run/dpdk/spdk_pid84897
00:33:35.848 Removing: /var/run/dpdk/spdk_pid85035
00:33:35.848 Removing: /var/run/dpdk/spdk_pid85308
00:33:35.848 Removing: /var/run/dpdk/spdk_pid85328
00:33:35.848 Removing: /var/run/dpdk/spdk_pid85787
00:33:35.848 Removing: /var/run/dpdk/spdk_pid85965
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86055
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86148
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86192
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86213
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86522
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86555
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86611
00:33:35.848 Removing: /var/run/dpdk/spdk_pid86973
00:33:35.848 Removing: /var/run/dpdk/spdk_pid87119
00:33:35.848 Removing: /var/run/dpdk/spdk_pid87915
00:33:36.107 Removing: /var/run/dpdk/spdk_pid88036
00:33:36.107 Removing: /var/run/dpdk/spdk_pid88189
00:33:36.107 Removing: /var/run/dpdk/spdk_pid88281
00:33:36.107 Removing: /var/run/dpdk/spdk_pid88567
00:33:36.107 Removing: /var/run/dpdk/spdk_pid88809
00:33:36.107 Removing: /var/run/dpdk/spdk_pid89152
00:33:36.107 Removing: /var/run/dpdk/spdk_pid89306
00:33:36.107 Removing: /var/run/dpdk/spdk_pid89477
00:33:36.107 Removing: /var/run/dpdk/spdk_pid89513
00:33:36.107 Removing: /var/run/dpdk/spdk_pid89722
00:33:36.107 Removing: /var/run/dpdk/spdk_pid89745
00:33:36.107 Removing: /var/run/dpdk/spdk_pid89787
00:33:36.107 Removing: /var/run/dpdk/spdk_pid90034
00:33:36.107 Removing: /var/run/dpdk/spdk_pid90242
00:33:36.107 Removing: /var/run/dpdk/spdk_pid90820
00:33:36.107 Removing: /var/run/dpdk/spdk_pid91614
00:33:36.107 Removing: /var/run/dpdk/spdk_pid92380
00:33:36.107 Removing: /var/run/dpdk/spdk_pid92710
00:33:36.107 Removing: /var/run/dpdk/spdk_pid92838
00:33:36.107 Removing: /var/run/dpdk/spdk_pid92914
00:33:36.107 Removing: /var/run/dpdk/spdk_pid93270
00:33:36.107 Removing: /var/run/dpdk/spdk_pid93327
00:33:36.107 Removing: /var/run/dpdk/spdk_pid93973
00:33:36.107 Removing: /var/run/dpdk/spdk_pid94302
00:33:36.107 Removing: /var/run/dpdk/spdk_pid95268
00:33:36.107 Removing: /var/run/dpdk/spdk_pid95391
00:33:36.108 Removing: /var/run/dpdk/spdk_pid95427
00:33:36.108 Removing: /var/run/dpdk/spdk_pid95491
00:33:36.108 Removing: /var/run/dpdk/spdk_pid95540
00:33:36.108 Removing: /var/run/dpdk/spdk_pid95604
00:33:36.108 Removing: /var/run/dpdk/spdk_pid95807
00:33:36.108 Removing: /var/run/dpdk/spdk_pid95886
00:33:36.108 Removing: /var/run/dpdk/spdk_pid95943
00:33:36.108 Removing: /var/run/dpdk/spdk_pid96043
00:33:36.108 Removing: /var/run/dpdk/spdk_pid96069
00:33:36.108 Removing: /var/run/dpdk/spdk_pid96126
00:33:36.108 Removing: /var/run/dpdk/spdk_pid96254
00:33:36.108 Removing: /var/run/dpdk/spdk_pid96442
00:33:36.108 Removing: /var/run/dpdk/spdk_pid97028
00:33:36.108 Removing: /var/run/dpdk/spdk_pid98039
00:33:36.108 Removing: /var/run/dpdk/spdk_pid98553
00:33:36.108 Removing: /var/run/dpdk/spdk_pid98856
00:33:36.108 Clean
00:33:36.108 09:51:03 -- common/autotest_common.sh@1453 -- # return 0
00:33:36.108 09:51:03 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:36.108 09:51:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:36.108 09:51:03 -- common/autotest_common.sh@10 -- # set +x
00:33:36.108 09:51:03 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:36.108 09:51:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:36.108 09:51:03 -- common/autotest_common.sh@10 -- # set +x
00:33:36.108 09:51:03 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:36.108 09:51:03 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:36.108 09:51:03 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:36.108 09:51:03 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:36.108 09:51:03 -- spdk/autotest.sh@398 -- # hostname
00:33:36.108 09:51:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:36.366 geninfo: WARNING: invalid characters removed from testname!
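
The hostname/lcov pair above is the coverage capture step: it walks the counter data under the repo, tags the capture with the runner's hostname, and writes it to cov_test.info; the lcov calls that follow merge that capture with the pre-test baseline and prune trees that should not count toward coverage. A minimal sketch of the same capture-merge-filter flow, with SPDK_DIR and OUT standing in for the absolute paths in the log and the long --rc flags omitted:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    OUT=$SPDK_DIR/../output
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"  # capture
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge with baseline
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"              # drop vendored code
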
00:34:02.915 09:51:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:04.827 09:51:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:07.370 09:51:34 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:09.924 09:51:37 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:12.467 09:51:40 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:15.012 09:51:42 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:17.561 09:51:45 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:17.823 09:51:45 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:17.823 09:51:45 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:17.823 09:51:45 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:17.823 09:51:45 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:17.823 09:51:45 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:17.823 + [[ -n 5772 ]]
00:34:17.823 + sudo kill 5772
00:34:17.832 [Pipeline] }
00:34:17.847 [Pipeline] // timeout
00:34:17.852 [Pipeline] }
00:34:17.866 [Pipeline] // stage
00:34:17.871 [Pipeline] }
00:34:17.885 [Pipeline] // catchError
00:34:17.893 [Pipeline] stage
00:34:17.895 [Pipeline] { (Stop VM)
00:34:17.908 [Pipeline] sh
00:34:18.191 + vagrant halt
00:34:20.794 ==> default: Halting domain...
00:34:30.805 [Pipeline] sh
00:34:31.088 + vagrant destroy -f
00:34:33.631 ==> default: Removing domain...
00:34:34.215 [Pipeline] sh
00:34:34.500 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:34.510 [Pipeline] }
00:34:34.525 [Pipeline] // stage
00:34:34.529 [Pipeline] }
00:34:34.543 [Pipeline] // dir
00:34:34.548 [Pipeline] }
00:34:34.562 [Pipeline] // wrap
00:34:34.568 [Pipeline] }
00:34:34.582 [Pipeline] // catchError
00:34:34.592 [Pipeline] stage
00:34:34.595 [Pipeline] { (Epilogue)
00:34:34.610 [Pipeline] sh
00:34:34.897 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:41.494 [Pipeline] catchError
00:34:41.496 [Pipeline] {
00:34:41.510 [Pipeline] sh
00:34:41.794 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:41.795 Artifacts sizes are good
00:34:41.805 [Pipeline] }
00:34:41.821 [Pipeline] // catchError
00:34:41.832 [Pipeline] archiveArtifacts
00:34:41.840 Archiving artifacts
00:34:42.001 [Pipeline] cleanWs
00:34:42.013 [WS-CLEANUP] Deleting project workspace...
00:34:42.013 [WS-CLEANUP] Deferred wipeout is used...
00:34:42.019 [WS-CLEANUP] done
00:34:42.021 [Pipeline] }
00:34:42.036 [Pipeline] // stage
00:34:42.040 [Pipeline] }
00:34:42.055 [Pipeline] // node
00:34:42.061 [Pipeline] End of Pipeline
00:34:42.105 Finished: SUCCESS